Evaluation and monitoring
Regular readers will know that I’ve been thinking a lot about evaluation for many years now. I am not an evaluator, but almost every project I am involved in contains some element of evaluation. Sometimes that evaluation is well thought through and effective, and other times (the worst of times, more often than you’d think) the carefully designed evaluation plan crumbles in the face of the HIPPO – the Highest Paid Person’s Opinion. So how do we really know what is going on?
When I stumbled across Michael Quinn Patton’s work on Developmental Evaluation, a whole bunch of new doors opened up to me. I was able to see the crude boundaries of traditional evaluation methods very clearly, and to see that most of the work I do in the world – facilitating strategic conversations – was actually a core practice of developmental evaluation. Crudely put, traditional “merit and worth” evaluation methods work well when you have a knowable and ordered system, where the actual execution can be evaluated against a set of ideal causes that lead to an ideal state. Did we build the bridge? Does it work according to the specifications of the project? Was it a good use of money? All of that can be evaluated summatively.
In unordered systems, where complexity and emergence are at play, summative evaluation cannot work at all. The problem with complex systems is that you cannot know in advance what set of actions will lead to the result you need, so evaluating efforts against an ideal state is impossible. Well, it’s POSSIBLE, but what happens is that the evaluator brings her own judgements to the situation. Complex problems (or more precisely, emergent problems generated by complex systems) cannot be solved, per se. While it is possible to build a bridge, it is not possible to create a violence-free society. Violent societies are emergent.
So that’s the back story. Last December I went to London to do a deep dive into how the Cynefin framework, and Cognitive Edge’s work in general, can inform a more sophisticated practice of developmental evaluation. After a few months of thinking about it and being in conversation with several Cognitive Edge practitioners, including Ray MacNeil in Nova Scotia, I think my problem is that the term “evaluation” can’t actually make the jump to understanding action in complex systems. Ray and I agreed that Quinn Patton’s work on Developmental Evaluation is a great departure point for inviting people to leave behind what they usually think of as evaluation and to enter into the capacities that are needed in complexity. These capacities include addressing problems obliquely rather than head on, making small safe-to-fail experiments, undertaking action to better understand the system rather than to effect a change, practicing true adaptive leadership – which means practicing anticipatory awareness rather than predictive planning – working with patterns and sense-making as you go rather than rules and accountabilities, and so on.
Last night a little Twitter exchange between Viv McWaters, Dave Snowden and me, based on Dave’s recent post, compelled me to explore this a bit further. What especially grabbed me was this line: “The minute we evaluate, assess, judge, interpret or whatever we start to reduce what we scan. The more we can hold open a description the more we scan, the more possibility of seeing novel solutions or interesting features.”
What is needed in this practice is monitoring. You need to monitor the system in all kinds of different ways, and to monitor yourself, because in a complex system you are part of it. Monitoring is a fine art, and it requires us to pay attention to story, patterns, finely grained events and simple numbers that are used to measure things rather than serve as targets. Monitoring temperatures helps us to understand climate change, but we don’t use temperatures as targets. Nor should we equate large-scale climate change with a fine-grained indicator like temperature.
Action in complex systems is a never-ending art of responding to a changing context. It requires us to adopt more sophisticated monitoring tools and to use individual and distributed cognition to make enough sense of things to move, all the while watching what happens when we do. It is possible to understand retrospectively what you have done, and that is fine as long as you don’t confuse what you learn by doing that with the urge to turn it into a strategic plan going forward.
What role can “evaluation” have when your learning about the past cannot be applied to the future?
For technical problems in ordered systems, evaluation is of course important and correct. Expert judgement is required to build safe bridges, fix broken water mains, do the books, audit banks and get food to those who need it. But in complex systems – economies, families, communities and democracies – I’m beginning to think that we need to stop using the word evaluation and start adopting new language like monitoring and sense-making.
This resonates. And from the method paper on safe-to-fail probes: “The experiments are then reviewed for common elements and resourced along with set up of monitoring and review processes.” 🙂
Yes. Monitoring is critical. But instead of evaluating the results you have the system make sense of the results, which is a very, very different strategy. The problem now, as always, is how to help groups understand action and results in a complex and always uncertain context.
Thanks Chris. Good summary. I’d add my two cents’ worth: evaluation is BIG business. Lots of people make lots of money out of ‘traditional’ summative evaluations that are ‘done’ to others. I have little or no interest in this apart from noticing it. Before developmental evaluation there was utilization-focused evaluation, and this is where I have been most influenced.

The real power in evaluation lies not in accountability but in learning – and learning by the people who are doing the work – not the funders, not the regulators, not the policy makers. They may benefit if they’re wise enough to notice, but in my experience they are way too focused on accountability – using your example, was the bridge built to specifications, on time, on budget, and does it work as a bridge? What interests me more is the experience of the people – what did they plan to do, what had to change and why, how did they adapt, what did they learn about the project along the way, how would they do it differently next time?

This is where facilitation practices can be very useful – enabling people involved in any project, during the life of the project, to notice what is happening, collect information, see what themes emerge, analyse the results (both quant and qual) themselves and reflect on it. I’m not a fan of pre-determined themes. As you say, they pre-determine what you notice.

Where I diverge from you is that I think all work is complex – anything that involves people. Monitoring answers the question ‘what happened?’ while evaluation answers the question ‘why?’ I’m a fan of building the capacity of everyone to be their own evaluator, rather than relying on evaluation ‘experts’ or external evaluators.
Love it. And while I think that people can always make things complex, not all work is complex. Opening a door. Applying a rivet. Entering figures in a ledger. These tasks are important to evaluate correctly even though they are part of a bigger system. In fact some work is so simple that it can be automated, and then it really does need to be monitored and evaluated.
I have heard the line that “everything involving people is complex,” but I think that is a matter of scale. Yes, the universe is self-organizing. No, my living room does not paint itself.
Happy, happy with this!!!
I checked out Developmental Evaluation some years ago, and was pleased with it at that time; but then didn’t do much with it…
Yes to ‘making sense of the results’… collectively!
I will have to read this again and again… because it is a different language than what I use normally, but for some clients this might be better!
My challenge is finding the good language. Some of it is philosophical and some of it is technical jargon. It’s important that we use philosophical terms correctly and make jargon simple to understand. It’s all quite new to me so I’m trying it on a lot and seeing how it fits. So far it really helps me understand my experience even if it doesn’t yet clearly describe a need to a client.
I like the way you have drawn this out to your conclusion.
Has Dave responded to this ?
Not yet. He’s in Wales, according to his blog, so he’s probably a little misty eyed at the moment.