
Category Archives: "Evaluation"

Struggling to pick up the trash in the face of weaponized evaluation

October 7, 2018 By Chris Corrigan Democracy, Evaluation, Featured 5 Comments

Most of my work lies with the organizations of what Henry Mintzberg calls “the plural sector.” These are the organizations tasked with picking up the work that governments and corporations refuse to do. As we have sunk further and further into the 40-year experiment of neo-liberalism, governments have abandoned the space of care for communities and citizens, especially if that care clashes with an ideology of reducing taxes to favour the wealthy and the largely global corporate sector. Likewise, on the corporate side, a singular focus on shareholder return and the pursuit of capital-friendly jurisdictions with low tax rates and low wages means that corporations can reap economic benefits without any responsibility for the social effects of their policy influence.

Here’s how Mintzberg puts it, in a passionate defence of the role of these organizations:

“We can hardly expect governments—even ostensibly democratic ones—that have been coopted by their private sectors or overwhelmed by the forces of corporate globalization to take the lead in initiating radical renewal. A sequence of failed conferences on global warming has made this quite clear.

Nor can private sector businesses be expected to take the lead. Why should they promote changes to redress an imbalance that favors so many of them, especially the most powerful? And although corporate social responsibility is certainly to be welcomed, anyone who believes that it will compensate for corporate social irresponsibility is not reading today’s newspapers.”

What constantly surprises me in this work is how much accountability is placed on the plural sector for achieving outcomes around issues that they have so little role in creating.

While corporations are able to simply externalize any effects of their operations that are not relevant to their KPIs and balance sheets, governments are increasingly held to account by citizens for failing to make significant change with ever-reduced resources and regulatory influence. Strident anti-government governments are elected and immediately set out to dismantle what is left of the government’s role, peddling platitudes such as “taxation is theft” and associated libertarian nonsense. They generally, and irresponsibly, claim that the market is the better mechanism to solve social problems, even though the market has been shown to be a psychotic beast hell-bent on destroying local communities, families and the climate in pursuit of its narrowly focused agenda. In the forty years since Reagan, Thatcher and Mulroney went to war against government, the market has failed on nearly every score to create secure economic and environmental futures for all peoples. And it has utterly stripped entire nations of wealth and resources, causing their people to flee the ensuing wars, depressions, and environmental destruction. Migrants run headlong into the very countries that displaced them in the first place and meet hostile resistance there. Xenophobia and racism get channeled into policy and simply increase the rate of exploitation and wealth concentration.

And yet, the people I know who struggle under the most pressure to prove their worth are the organizations of the plural sector, who are subject to onerous and ontologically incorrect evaluation criteria aimed at, presumably, assuring their funders that the rabble are not only responsibly spending money (which is totally understandable) but also making a powerful impact on issues which are driven by forces well outside their control.

I’m increasingly understanding the role that a great deal of superficial evaluation plays in actually restricting the effectiveness of the plural sector, so that it is relegated to harm reduction for capitalism rather than pursuing the radical reforms to our global economic system that will lead to sustainability. It’s frustrating for so many on the frontlines, and it has led to calls for much more unrestricted granting in order to allow organizations to effectively allocate their resources, respond to emerging patterns, and learn from their work.

There are some fabulous people working in the field of evaluation to try to disrupt this dynamic by developing robust methods of complexity-informed research in support of what the front line of the plural sector is tasked with. The battle, especially now that science itself is under attack, is to make these research methods widely understood and effective not simply in evaluating the work of the plural sector but also in shining a light on the clear patterns at play in our economic system.

I’ll be running an online course in the winter with Beehive Productions where we look at evaluation from the perspective of facilitators and leaders of social change. We won’t shy away from this conversation as we look at where evaluation practice has extended beyond the narrow confines of program improvement and into larger social conversation. We will look at history and power, and at how evaluation is weaponized against radical reform in favour of, at best, sustaining good programs and, at worst, shutting down effective work.

Share:

  • Click to share on Mastodon (Opens in new window) Mastodon
  • Click to share on Bluesky (Opens in new window) Bluesky
  • Click to share on LinkedIn (Opens in new window) LinkedIn
  • Click to email a link to a friend (Opens in new window) Email
  • Click to print (Opens in new window) Print
  • More
  • Click to share on Reddit (Opens in new window) Reddit
  • Click to share on Tumblr (Opens in new window) Tumblr
  • Click to share on Pinterest (Opens in new window) Pinterest
  • Click to share on Pocket (Opens in new window) Pocket
  • Click to share on Telegram (Opens in new window) Telegram

Like this:

Like Loading...

The limits of certainty

September 28, 2018 By Chris Corrigan Complexity, Evaluation, Featured

An interesting review essay by John Quiggin looks at a new book by Ellen Broad called Made by Humans: The AI Condition. Quiggin is intrigued by Broad’s documentation of the way algorithms have changed over the years, from originating as “a well-defined formal procedure for deriving a verifiable solution to a mathematical problem” to becoming a formula for predicting unknown and unknowable futures. Math problems that benefit from algorithms fall firmly in the Ordered domains of Cynefin. But the problems that AI is now being deployed upon are complex and emergent in nature, and therefore instead of producing certainty and replicability, AI is being asked to provide probabilistic forecasts of the future. (A short sketch of both senses of “algorithm” follows the excerpt below.)

For the last thousand years or so, an algorithm (derived from the name of an Arab mathematician, al-Khwarizmi) has had a pretty clear meaning — namely, it is a well-defined formal procedure for deriving a verifiable solution to a mathematical problem. The standard example, Euclid’s algorithm for finding the greatest common divisor of two numbers, goes back to 300 BCE. There are algorithms for sorting lists, for maximising the value of a function, and so on.


As their long history indicates, algorithms can be applied by humans. But humans can only handle algorithmic processes up to a certain scale. The invention of computers made human limits irrelevant; indeed, the mechanical nature of the task made solving algorithms an ideal task for computers. On the other hand, the hope of many early AI researchers that computers would be able to develop and improve their own algorithms has so far proved almost entirely illusory.


Why, then, are we suddenly hearing so much about “AI algorithms”? The answer is that the meaning of the term “algorithm” has changed. A typical example, says Broad, is the use of an “algorithm” to predict the chance that someone convicted of a crime will reoffend, drawing on data about their characteristics and those of the previous crime. The “algorithm” turns out to over-predict reoffending by blacks relative to whites.


Social scientists have been working on problems like these for decades, with varying degrees of success. Until very recently, though, predictive systems of this kind would have been called “models.” The archetypal examples — the first econometric models used in Keynesian macroeconomics in the 1960s, and “global systems” models like that of the Club of Rome in the 1970s — illustrate many of the pitfalls.

A vast body of statistical work has developed around models like these, probing the validity or otherwise of the predictions they yield, and a great many sources of error have been found. Model estimation can go wrong because causal relationships are misspecified (as every budding statistician learns, correlation does not imply causation), because crucial variables are omitted, or because models are “over-fitted” to a limited set of data.


Broad’s book suggests that the developers of AI “algorithms” have made all of these errors anew. Asthmatic patients are classified as being at low risk for pneumonia when in fact their good outcomes on that measure are due to more intensive treatment. Models that are supposed to predict sexual orientation from a photograph work by finding non-causative correlations, such as the angle from which the shot is taken. Designers fail to consider elementary distinctions, such as those between “false positives” and “false negatives.” As with autonomous weapons, moral choices are made in the design and use of computer models. The more these choices are hidden behind a veneer of objectivity, the more likely they are to reinforce existing social structures and inequalities.


The superstitious reverence with which computer “models” were regarded when they first appeared has been replaced by (sometimes excessive) scepticism. Practitioners now understand that models provide a useful way of clarifying our assumptions and deriving their implications, but not a guaranteed path to truth. These lessons will need to be relearned as we deal with AI.


Broad makes a compelling case that AI techniques can obscure human agency but not replace it. Decisions nominally made by AI algorithms inevitably reflect the choices made by their designers. Whether those choices are the result of careful reflection, or of unthinking prejudice, is up to us.
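To make the older meaning concrete, here is Euclid’s procedure itself, sketched in Python. This is an illustrative aside, not from Quiggin’s essay or Broad’s book: a short, checkable recipe whose answer can be verified independently.

    # Euclid's algorithm (c. 300 BCE): a well-defined formal procedure
    # whose answer can be verified -- the classical sense of "algorithm".
    def gcd(a: int, b: int) -> int:
        while b:
            a, b = b, a % b
        return a

    print(gcd(48, 36))  # 12: we can check that 12 divides both numbers and that no larger number does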

In general I think that scientists understand the limits of this approach to modelling, and that was borne out in several discussions I had with ecologists last week in Quebec. We do have to define what we mean by “prediction” though. Potential futures can be predicted with some probability if you understand the nature of the system, but exact outcomes cannot be predicted. However, we (by whom I mean the electorate and policy makers who work to make single decisions out of forecasts) do tend to venerate predictive technologies because we cling to the original definition of an algorithm, and we can come to believe that a model’s robustness is enough to guarantee the accuracy of a prediction. We end up trusting forecasts without understanding probability, and when things don’t go according to plan, we blame the forecasters rather than our own complexity illiteracy.
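By contrast with Euclid’s verifiable answer, a predictive “algorithm” of the newer kind behaves like a forecast. Here is a minimal sketch (an illustrative aside with invented numbers, not drawn from the essay or the book) of what a probabilistic prediction gives you: a range of plausible futures rather than a single verifiable answer.

    # A toy probabilistic forecast: simulate many possible futures under
    # uncertain yearly change and report a range, not a point prediction.
    import random

    def one_future(value: float = 100.0, years: int = 10) -> float:
        for _ in range(years):
            value *= 1 + random.gauss(0.02, 0.05)  # the yearly change is uncertain, so every path differs
        return value

    futures = sorted(one_future() for _ in range(10_000))
    low, median, high = futures[500], futures[5_000], futures[9_500]
    print(f"90% of simulated futures fall between {low:.0f} and {high:.0f} (median {median:.0f})")

Even a well-built model of this kind only tells you how often an outcome shows up across simulated futures; it cannot tell you which future will actually occur, which is exactly where complexity illiteracy bites.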


Principles focused evaluation and racial equity

June 4, 2018 By Chris Corrigan Collaboration, Community, Evaluation, Featured, First Nations 8 Comments

I was happy to be able to spend a short time this week at a gathering of Art of Hosting practitioners in Columbus, Ohio. People had gathered from across North America and further afield to discuss issues of racial equity in hosting and harvesting practices. I’ve been called back home early to deal with a broken pipe and a small flood in my house, but before I left I was beginning to think about how to apply what I was learning with respect to strategy and evaluation practices. I was going to host a conversation about this, but instead, I have a 12 hour journey to think with my fingers.

My own thinking on this topic has largely been informed by the work I’ve done over thirty years at the intersection between indigenous and non-indigenous communities and people in Canada. Recently this work has been influenced by the national conversation on reconciliation. That conversation, which started promisingly, has been treated with more and more cynicism by indigenous people, who are watching non-indigenous Canadians pat themselves on the back for small efforts while large issues of social, economic and political justice have gone begging for attention. Reconciliation is gradually losing its ability to inspire transformative action. And people are forgetting the very important work of truth coming before reconciliation. Truth is hard to hear. Reconciliation is easy to intend.

As a result, I’m beginning to suggest to some non-indigenous groups that they should not think of their work as attempting to get to reconciliation, but instead focus on work with indigenous communities that has a real, tangible, and material impact on indigenous people. Reconciliation can then be a by-product and a way of evaluating the work while we work together to achieve positive effects.

So my question now is, what if reconciliation was one of the ways we evaluated work done with indigenous communities, and not as an end in itself?

x x x

“Every action happens within a frame and the frame is very important.”

— Maurice Stevens, on Sunday prefacing a story he told about race.

Evaluation is a very powerful tool because it is often a hidden frame that guides strategic work. Ethical evaluators work hard to prevent their work from becoming an intervention that determines the direction of a project. In work that involves social change, poorly designed evaluation can narrow the work to a few isolated outcomes, and leave people with the impression that complex problems can only be addressed by linear and predictable planning practices.

Wielded unconsciously, evaluation can be a colonizing tool, introducing ways of knowing that are alien to the cultures of the communities that are doing the work. Sometimes called “epistemic violence,” this kind of intervention devalues and erases the ways participants themselves make sense of their world and know about their work, and the standards by which they value an action as good.

Complexity demands of us that we work towards an unknowable and unpredictable future in a direction that we agree is good, useful, and desirable. Agreeing together on what is good and desirable for a project should be the work of the people upon whom the project will have a direct effect. The principle of “Nothing about us without us” captures this ethical imperative. In complex adaptive systems and problems, outcomes are impossible to predict and the ways forward need to be discovered. Imposing a direction or a destination can have a substantial negative impact on the ability of a community to address its issues in a way that is meaningful to the community. Many projects fail because they become about achieving a good evaluation score. It is a powerful attractor in a system.

Evaluation frameworks are based on stories about how we believe change happens. I have seen many examples of these stories over the years:

  • An orderly sequence of steps will get you to your goal.
  • The people need to be changed in order for a new world to arise.
  • Leadership must go to the mountain of enlightenment and bring down a new set of brilliant teachings to lead the people in a different direction.
  • We are feeling our way through the woods, discovering the truth as we go.
  • Life is like navigating on a storm-tossed sea, and our ability to get where we are going relies on our ability to understand how the ship and the weather and the ocean work.
  • If only we can put the parts together in a greater whole, then the collective impact we desire will be made.

You can probably name dozens of the archetypal stories that underlie the way you’ve made sense of projects you are involved in. But how often are these stories questioned? And what if the stories we use to frame our evaluation and ways of knowing about what’s good are based on stories that are not relevant or, worse, dangerous, in the context in which we are working?

I once sat with Jake Swamp, a well-known Mohawk elder, who told me a story of the numerous times he met with the Dalai Lama. Jake said that he and the Dalai Lama often discussed peace, as that was a key focus of their work, and their approaches to peace differed quite substantially. To paraphrase Jake, for the Dalai Lama, peace was attainable through individual practice and enlightenment, mainly through personal meditation. Jake offered a different view, based on the Great Law of Peace, which is the set of organizing principles for the Haudenosaunee Confederacy. In this context, individuals achieving a state of peace separate from their family and clan are dangerous to the whole. For Jake, peace is an endeavour to be worked on collectively and in relationship, and the difference for him was critical.

Imagine an evaluator, then, working with the Dalai Lama’s ideas of peace and applying them to the workings of the Haudenosaunee Confederacy. A de-emphasis on personal practice would get a failing grade. The story of how to achieve peace determines what the evaluator looks for and, if the evaluator were a practicing Tibetan Buddhist for example, they might not even be able to see how Haudenosaunee chiefs, clan mothers, families, and communities were working on maintaining peace.

This happens all the time with evaluation practice. The stories and lenses that evaluators use determine what they see, and their intervention in the project often determines the direction of the work.

x x x

Recently several colleagues and I attended a workshop with Michael Quinn Patton who was introducing the new field of principles-focused evaluation. I got excited at this workshop, not only because Quinn Patton is an important theorist who has brought complexity thinking into the evaluation world, but also because this new approach offers some promise for how we might evaluate the principles that actively shape the way we plan, work and evaluate action.

Interventions in complex systems rely on the skillful use of constraints. If you constrain action too tightly – through rules and regulations and accountability for unknowable outcomes – you get people gaming the system, taking reductionist approaches to problems by breaking them into easily achievable chunks, and generally avoiding the difficult and uncomfortable work in favour of doing what needs to be done to pass the test. It does not result in systemic change, but a lot of work gets done. However, if you apply constraints too loosely and offer no guideposts at all, work goes in many different directions, money and energy get stretched, and the impact is diffuse, if even noticeable at all.

The answer is to guide work with principles that are flexible and yet strong enough to keep everyone moving in a desirable direction. You need a malleable riverbank, not a canal wall or a flooded field. Choose principles that will help keep you together and do good work, and evaluate the effectiveness of those principles to achieve effective means and not simply desired ends.

Quinn Patton gives a useful heuristic for developing effective principles for complexity work. These principles are remembered by the acronym GUIDE (explanations are mine):

  • GUIDING: Principles should give you a sense of direction
  • USEFUL: Principles should help you make a decision when you find yourself in a new context
  • INSPIRATIONAL: Principles should inspire new action
  • DEVELOPMENTAL: Principles should be able to evolve with time and practice to meet new contexts
  • EVALUABLE: You should be able to know whether you are following a principle or not.

Because principles-focused evaluation – and I would say principles-based planning – is context-dependent, one has a choice about what principles to use. If I were evaluating the Dalai Lama’s approach to peace-making I might use a principle like:

The development of individual mindfulness practice twice a day is essential to peace.

If I was working with Jake perhaps we might use a principle like:

A chief must be in good relation with his clan mothers in order to deliberate in the longhouse to maintain peace.

Principles are then used to structure action so that it happens in a certain way, and evaluation questions are designed to discover how well people are able to use these principles and whether they had the desired effect. Using monitoring processes, rapid feedback, storytelling and reflection means that the principles themselves become the thing that is also evaluated, in addition to the outcomes and other learning that goes on in a project.

The sources of those principles are deeply rooted in stories and teachings from the culture that is pursuing peace and peacefulness. It is very useful for those principles to be applied within their own context, but very ineffective for them to be applied in the other context.

And so perhaps you can see what this has to do with reconciliation – and racial justice – as an evaluation framework and not necessarily a stated outcome. If reconciliation and racial justice are a consequence of the WAY we work together instead of an outcome we know how to get to, then we must place our focus on evaluating the principles that guide our work together, no matter what it is, so that in doing it, we increase racial equity.

It is entirely possible for settler-colonial governments to do work that benefits indigenous communities without that work contributing towards reconciliation. The federal government could choose to fund the installation and maintenance of safe running water systems in all indigenous communities, and impose that on First Nations governments, sending in their own construction crews and holding maintenance contracts without involvement of First Nations communities. The outcome of the project might be judged to be good, but doing it that way would be against several principles of reconciliation, including the principle of working in relationship. Everyone would have running water – which is desperately needed – but the cause of reconciliation might be set back. Ends and means both matter.

x x x

So this brings me to practicalities. How can we embed racial justice, equity or reconciliation in our work using the evaluation of principles?

Part of the work of racial justice and reconciliation is to work from the stories and ways of knowing of groups that have been marginalized by privilege and colonization. We often work hard – but often not hard enough – to include people in the design of the participatory strategic and process work that affects their communities, but it is rare in my experience that those same voices and ways of knowing are included in the evaluation of that work. If reconciliation and justice are ALSO to be an outcome of development work, then the way to create evaluation frameworks is to work with the stories of community and to question the implicit narrative and value structures of the evaluators.

This can be done by, for example, having Elders and traditional storytellers share important traditional stories of justice or relationship with project participants and then convening participants in a workshop to identify the values and principles that come through the teachings in these stories. Making these principles the core around which the evaluation takes place, and including the storytellers and Elders in the evaluation of the effectiveness of those principles within the project over time, seems to me to be a simple and direct way to embed the practice of racial justice and reconciliation in the work of funding and resourcing projects in indigenous communities.

I am not a professional evaluator, but my interest in the field is central to the work that I do, and I have seen for years the impact that evaluation has had on the projects I have been involved in. Anything that disrupts traditional evaluation to open up frameworks to different ways of knowing holds tremendous value for undermining the hidden effects of whiteness and privilege that thread through typical social change work supported by large foundations and governments.

But from this reflection, perhaps I can offer my own cursory principles of disrupting evaluation to build more racial equity into the work I do. How about these:

  • Work with stories about justice and relationship from the communities that are most affected by the work.
  • Have members of those communities tell the stories, distill the teachings and create the principles that can be used to evaluate the means of social change work.
  • Include storytellers and wisdom keepers on the evaluation team to guide the work according to the principles.
  • Create containers and spaces for people of privilege to be stretched and challenged to stay in the work despite discomfort, unfamiliarity and uncertainty. As my friend Tuesday Ryan-Hart says, “relationship is the result.”

I’ll stop there for now and invite you to digest this thinking. If you are willing to offer feedback on this, I’m willing to hear it.


Evaluation rigour for harvesting

July 10, 2017 By Chris Corrigan Art of Harvesting, Art of Hosting, Collaboration, Evaluation, Featured, Learning 3 Comments

We are embarking on an innovative approach to a social problem and we need a framework to guide the evaluation process. As it is a complex challenge, we’re beginning with a developmental evaluation framework. To begin creating that, I was at work for most of the morning putting together a meta-framework, consisting of questions our core team needs to answer. In Art of Hosting terms, we might call this a harvesting plan.

For me, when working in the space of developmental evaluation, Michael Quinn Patton is the guy whose work guides mine.  This morning I used his eight principles to fashion some questions and conversation invitations for our core team. The eight principles are:

  1. Developmental purpose
  2. Evaluation rigor
  3. Utilization focus
  4. Innovation niche
  5. Complexity perspective
  6. Systems thinking
  7. Co-creation
  8. Timely feedback

The first four of these are critical, the second four are kind of corollaries to the first four, and the first two are essential.

I think in the Art of Hosting and Art of Harvesting communities we get the first principle quite well: participatory initiatives are, by their nature, developmental. They evolve and change and engage emergence. What I don’t see a lot of, however, is good rigour around the harvesting and evaluation.

All conversations produce data. Hosts and harvesters make decisions and choices about the kind of data to take away from hosted conversations. Worse, we sometimes DON’T make those decisions and then we end up with a mess, and nothing useful or reliable as a result of our work.

I was remembering a poorly facilitated session I once saw where the facilitator asked for brainstormed approaches to a problem. He wrote them in a list on a flip chart. When there were no more ideas, he started at the top and asked people to develop a plan for each one.

The problems with this approach are obvious. Not all ideas are equal, and not all are practical. “Solve homelessness” is not on the same scale as “provide clothing bundles.” No one would seriously believe that this is an effective way to make a plan or address an issue.

You have to ask why things matter. When you are collecting data, why are you collecting that data and how are you collecting it? What is it being used for? Is it a reliable data source? What is your theoretical basis for choosing to work with this data versus other kinds of data?

I find that we do not do that enough in the Art of Hosting community. Harvesting is given very little thought other than “what am I going to do with all these flipcharts?”, at which point it is too late. Evaluation (and harvesting) rigour is a design consideration. If you are not rigorous in your data collection and your harvesting methods, others can quite rightly challenge your conclusions. If you cannot show that the data you have collected is coherent with a strategic approach to the problem you are addressing, you shouldn’t be surprised if your initiative sputters.

In my meta-framework the simple questions I am using are:

  • What are our data collection methods?
  • What is the theoretical basis and coherence for them?

That is enough to begin the conversation. Answering these has a major impact on what we are hosting.

I highly recommend Quinn Patton et al.’s book Developmental Evaluation Exemplars for a grounded set of principles and some cases. Get rigorous.


Privilege, beauty and evaluation

April 20, 2017 By Chris Corrigan Evaluation One Comment

I’ve been for a beautiful walk this morning in the warm mist of a spring day in the highlands near Victoria. It was quiet but for the cacophony of bird song, and everything was wet with mist and dew. This is the greenest time of year on the west coast, and the mossy outcroppings and forest floor were verdant.

There is a beauty in what is, in any given moment.

I’ve been thinking about this as I have been struggling with watching people be evaluated in their work recently. My daughter is a jazz musician, training in her art in a university program where she is judged on her performance and where the number assigned to that moment in time affects much in her life. My son laid out the papers he has been graded on, showing me a variety of marks that surprised him and made him proud of what he had accomplished. All of it a shallow judgement applied to a limited action in a tiny slice of time. Do these numbers take into consideration my daughter’s love of jazz, or my son’s pride in the story he wrote, or his ability to solve quadratic equations? Do they take into account how my kids approached this test, what it meant to them, what they were trying to do? How do these numbers track their changes, their growth, the effect that they are having on the world around them?

The evaluator’s job comes with enormous privilege. The privilege is in determining the frame within which the noticing takes place. Poorly done evaluation happens when an evaluator reduces a complex outcome like “impact” to a few arbitrary indicators, developed in isolation, with a poorly articulated rationale and little coherence with what is actually happening. When an evaluator walks into a process, it is amazing how much gravity also enters the work.

At some point in our culture – and maybe it was always thus – evaluation became something of an investigation used to justify accountability pursued with a particular agenda in mind. Frameworks became both too narrow and too fuzzy. I have been in processes where evaluators wanted a single number on a scale from 1 to 5 to rate the effectiveness of an experience. And I have been in processes where evaluators were seeking to measure “impact” without ever defining it, or only defining it by how a process has advanced their client’s singular needs and not the needs of the whole ecosystem. I have never seen an evaluation that says to a client, “these people are discovering some stuff that has nothing to do with what you funded them for, and therefore your assumptions about change are wrong.”

Done well, however, evaluation contributes a tremendous amount of knowledge, awareness and confidence to a process. It allows us to make sense of our work, it opens our eyes to different questions we should be asking, and it can put the tools of meaning-making in the hands of the people doing the work. In complex environments, it can give us a new set of senses that help us see and hear and feel what is happening, and that open up promising new directions to nudge an effort.

When evaluation is part of the work it makes a huge difference. When evaluation is a separate project, laid on top of the work or done at a distance, it can bring the work to a standstill as everyone organizes around what the evaluator is looking for instead of where the project is at in its evolution or what the needs are.

Evaluations conducted with principles such as these are amazingly useful and empowering. They are deeply powerful influences in the life of a project, and they need to be done with intense awareness of this power. We need to demand from our clients and funders and stakeholders a more sophisticated standard of engagement around evaluation, and we need to hold evaluators to these principles too.

There is tremendous beauty in the moments of people working together, learning, creating, trying to improve the lives of others. Some days are rich with green and lush life and others are despairing failures. I would love to read an evaluation report that is as rich as Thoreau’s observations of life at Walden capturing the changes and the beauty, witnessing the growth all around, understanding its meaning and being open to the surprises that come with being immersed in an experience.

I’ll be writing more about this topic in the next little while. What are your longings for or experiences of great evaluation?
