Disintermediated sensemaking
Art of Harvesting, Art of Hosting, Complexity, Evaluation, Facilitation, World Cafe
When I popped off to London last week to take a deep dive into Cognitive Edge’s work with complexity, one of the questions I held was about working with evaluation in the complex domain.
The context for this question stems from a couple of realities. First, evaluation of social programs, social innovation and other interventions in the human services is a huge industry and it holds great sway. And it is dominated by a world view of linear rationalism that says that we can learn something by determining whether or not we achieved the goals we set out to achieve. Second, evaluation is an incredibly privileged part of many projects and initiatives and itself becomes a strange attractor for project planning and funding approval. In order for funders to show others that their funding is making a difference, they need a “merit and worth” evaluation of their funds. The only way to do that is to gauge progress against expected results. And no non-profit in its right mind will say “we failed to achieve the goals we set out to address,” even though everyone knows that “creating safe communities,” for example, is an aspiration out of the control of any social institution and is subject to global economic trends as much as it is subject to discrete interventions undertaken by specific projects. The fact that folks working in human services are working in a complex domain means that we can all engage in a conspiracy of false causality in order to keep the money flowing (an observation Van Jones inspired in me a while ago). Lots of folks are making change, because they know intuitively how to do this, but the way we learn about that change is so tied to an inappropriate knowledge system that I’m not convinced we have much of an idea what works and what doesn’t. And I’m not talking about articulating “best practices.”
The evaluation methods that are used are great in the complicated domain, where causes and effects are easy to determine and where understanding critical pathways to solutions can have a positive influence on process. In other words, where you have replicable results, linear, summative evaluation works great. Where you have a system that is complex, where there are many dynamics working at many different scales to produce the problems you are facing, an entirely different way of knowing is needed. As Dave Snowden says, there is an intimate connection between ontology, epistemology and phenomenology. In plain terms, the kind of system we are in is connected to the ways of knowing about it and the ways of interpreting that knowledge.
I’m going to make this overly simplistic: If you are working with a machine, or a mechanistic process that unfolds along a linear trajectory, then mechanistic knowledge (problem solving) and interpretive strategies are fantastic. For complex systems, we need knowledge that is produced FROM the system and interpreted within the system. Evaluation that is done by people “outside” of the system and that reports findings filtered through “expert” or “disinterested” lenses is not useful for a system to understand itself.
Going into the Cynefin course I was interested to learn about how developmental evaluation fit into the complex domain. What I learned was the term “disintermediated sensemaking” which is actually the radical shift I was looking for. Here is an example of what it looks like in leadership practice.
Most evaluation processes employ a specialized evaluator to undertake the work. The problem with this is that it places a person between the data and experience and the use of the knowledge. It also increases the time between an experience and the meaning making of that experience, which can be a fatal lag for strategy in emergent systems. The answer to this problem is to let people in the system have direct experience of the data, and make sense of it themselves.
There are many many ways to do this, depending on what you are doing. For example:
- When clustering ideas, have the group do it. When only a few people come forward, let them start and then break them up and let others continue. Avoid premature convergence.
- When people are creating data, let them tag what it means. For example, in the decision making process we used last weekend, participants tagged their thoughts with numbers and tagged their numbers with thoughts, which meant that they ordered their own data.
- Produce knowledge at a scale you can do something about. A system needs to be able to produce knowledge at a scale that is usable, and only the system can determine this scale. I see many strategic plans for organizations that state things like “In order to create safe communities for children we must create a system of safe and nurturing foster homes.” The job of creating safe foster homes falls into the scope of the plan, but tying that to any bigger dynamics gets us into the problem of trying to focus our work on making an impact we have no ability to influence.
- Be really clear about the data you want people to produce and have a strategy for how they will make sense of it. World Cafe processes for example, often produce scads of data on table cloths at the centre of the table, but there is often so little context for this information that it is hard to make use of. My practice these days is to invite people to use the table cloths as scratch pads, and to collect important data on post it notes or forms that the group can work with. AND to do that in a way that allows people to be tagging and coding the data themselves, so that we don’t have to have someone else figure out what they meant.
- Have leaders and teams pore over the raw data and the signification frameworks that people have used and translate it into strategy.
These just begin to scratch the surface of this inquiry in practice. Over the next little while I’m going to be giving this approach a lot of thought and trying it out in practice as often as I can, where the context warrants it.
If you would like to try an exercise to see why this matters, try this: the next time you are facilitating a brainstorming session, have the group record dozens of insights on post its and place them randomly on a wall. Take a break and look over the post its. Without touching the post its, start categorizing them and record your categorization scheme. Then invite the group to have a go at it. Make sure everyone gets a chance to participate. Compare your two categorization schemes and discuss the differences. Discuss what might happen if the group were to follow the strategy implicit in your scheme vs. the strategy implicit in their scheme.
Thanks for sharing your thoughts on this. I’ve been doing a lot of thinking lately on how the Cynefin model can help guide us in beginning to think about better fits in evaluation approaches for the many “places of belonging” we find ourselves in as we work within the space of wicked questions. A few years back at an Illinois AoH retreat you introduced me to developmental evaluation (thanks!) and I believe it holds so much promise for us, but I like the Cynefin model as a way to help us work with groups in thinking about the conditions of the contexts we work within and where DE is an appropriate approach. I also like using the divergence-emergence-convergence model as a way to help groups understand how DE moves within and informs process in ways other approaches to evaluation fall short in innovative and dynamic environments. Regardless, there seems to be so much more promise for us in approaching evaluation as a process of inquiry rather than one of monitoring and “gotcha”. Thanks for adding to my thinking on this!
Great observations here, April. My learning about this is that we need to be learning and sensemaking at scales that are useful. In a complex project, that means many different scales and iterative cycles, ranging from “right now in this very meeting” all the way through to “did we change the world?”
The diverge/converge model is very interesting, and what is critical about it is recognizing the timing of things. Kaner et al. and Snowden talk about avoiding premature convergence, and that is perhaps one of the most overlooked aspects of the model. For sensemaking it is very important to keep things open and bubbling.