Probes, Prototypes and Pilot Projects
I’ve been working lately in program development involving a lot of complexity, innovation and co-creation, and I have seen these three terms sometimes used interchangeably to describe a strategic move. As a result, I’ve been adopting a more disciplined approach to these three kinds of activities.
First, some definitions.
Taken explicitly from Cynefin, a probe is an activity that teaches you about the context you are working with. The actual outcome of the probe doesn’t matter much, because the point is to create an intervention of some kind and see how your context responds. You learn about the context, and that helps you make better bets as you move forward – “more stories like this, fewer like that,” to quote Dave Snowden. Probes are small, safe to fail and easily observed. They help you test different and conflicting hypotheses about the context. If 8 out of 10 of your probes are not failing, you aren’t learning much about the limits of your context. Probes are actually methods of developmental evaluation.
A prototype is an activity designed to give you an idea of how a concept might work in reality. Prototypes are designs that are implemented for a short time, adjusted through a few iterations and improved upon. The purpose of a prototype is to put something into play and look at its performance. You need some success with a prototype in order to know which parts of it are worth building upon. Prototypes straddle the worlds of “safe to fail” and “fail-safe.” They are developmental evaluation tools, but they also require some level of summative evaluation in order to be fully understood. Prototypes are also probes, and you can learn a lot about the system from how they work.
A pilot is a project designed to prove the worthiness of an approach or a solution. You need it to have an actual positive effect in its outcomes, and it is less safe to fail. Pilots are often designed to achieve success, which is a good approach if you have studied the context with a set of probes and maybe prototyped an approach or two. Without good intelligence about the context you are working in, pilots are often shown to work by manipulating the results. A pilot project runs for a discrete amount of time and is then summatively evaluated to determine its efficacy. If it shows promise, it may be repeated, although there is always a danger of creating a “best practice” that does not translate across different contexts. If a pilot project is done well and works, it should be integrated into the basic operating procedures of an organization and tinkered with over time, until it starts showing signs of weakened effectiveness. From then on, it can become a program. And pilots are also probes; as you work with them, they too will tell you a lot about what is possible in the system.
The distinctions between these three things are quite important. Often change is championed in the non-profit world with the funding of pilot projects whose design is based on hunches and guesses about what works, or worse, on a set of social science research data that is merely one of many possible hypotheses, privileged only by the intensity of effort that went into the study. We see this all the time with needs assessments, gap analyses and SWOT-type environmental scans.
Rather than thinking of these as points along a gradient, though, I have been thinking of them as a nested set of circles:
Each one contains elements of the one within it. Developing one will go better if you have based your development on the levels below it. When you are confronted with complexity and several different ideas about how to move forward, run a set of probes to explore those ideas. When you have an informed hunch, start prototyping to see what you can learn about interventions. What you learn from those prototypes can be put to use in pilots, which can eventually become standard programs.
By far the most important mindshift in this whole area is adopting the right thinking about probes. Because pilot projects and even prototyping are common in the social development world, we tend to rely on these methods as ways of innovating. And we tend to design them from an outcomes basis, looking to game the results towards positive outcomes. I have seen very few pilot projects “fail,” even when they have not been renewed or funded. Working with probes turns this approach inside out. We seek to explore failure so we can learn about the tolerances and the landscape of the system we are working in. We “probe” around these fail points to see what we can learn about the context of our work. When we learn something positive, we design things to take advantage of it. We deliberately do things to test hypotheses and, if you’re really good and in a safe-to-fail position, you can even try to create failures to see how they unfold. That way you learn to spot weak signals of failure and recognize them when they appear, so that when you come to design prototypes and pilots, you “know when to hold ’em and know when to fold ’em.”