
Category Archives: "Complexity"

From the feed

December 9, 2018 By Chris Corrigan Art of Harvesting, Art of Hosting, Collaboration, Complexity, Evaluation, Links, Philanthropy

Some interesting links that caught my eye this week.

Why Black Hole Interiors Grow (Almost) Forever

Leonard Susskind has linked the growth of black holes to increasing complexity. Is it true that the world is becoming more complex?

“It’s not only black hole interiors that grow with time. The space of cosmology grows with time,” he said. “I think it’s a very, very interesting question whether the cosmological growth of space is connected to the growth of some kind of complexity. And whether the cosmic clock, the evolution of the universe, is connected with the evolution of complexity. There, I don’t know the answer.”

With a Green New Deal, here’s what the world could look like for the next generation

This is the vision I have been asking for from our governments. It is the vision that would get me on board with using our existing oil and gas resources to manufacture and fund the infrastructure to accelerate this future for my kids. The cost of increasing fossil fuel use is so high that it needs to be accompanied by a commitment to a faster transition to this kind of world. Read the whole thing.

Why we suck at “solving wicked problems”

Sonja Blignaut is one of the people in the world with whom I share the greatest overlap of theory and practice curiosities regarding complexity. I know this because whenever she posts something on her blog, I almost always find myself wishing I had written it! Here’s a great post on five things we can do to disrupt our thinking about problem solving and enable us to work much better with complexity.

Money and technology are hugely valuable resources: they are certainly necessary, but they are not sufficient. Simply throwing more money and/or more advanced technology at a problem will not make it go away. We need to fundamentally change our thinking paradigm and approach things in context-appropriate ways; otherwise we will never move the needle on these so-called wicked problems.

rock/paper/scissors and beyond

I miss Bernie DeKoven. Since he died earlier this year I’ve missed seeing his poetic and playful blog posts about games and fun. Here is one from his archives about variations on rock/paper/scissors.

The relationship between the two players is both playful and intimate. The contest is both strategic and arbitrary. There are rumors that some strategies actually work. Unless, of course, the players know what those strategies are. Sometimes, choosing a symbol at random, without logic or forethought, is strategically brilliant. Other times, it’s just plain silly.

So they play, nevertheless. Believing whatever it is that they want or need to believe about the efficacy of their strategies, knowing that there is no way to know.

The longer they play together, the more mystical the game becomes.

They play between mind and mindlessness. For the duration of the game, they occupy both worlds. The fun may not feel special, certainly not mystical. But the reality they are sharing is most definitely something that can only be found in play.

How Evaluation Supports Systems Change

An unassuming little article that outlines five key practices that could be the basis of a five-day deep dive into complexity and evaluation. I found this article earlier in the year, and notice that my own practice and attention have come back to these five points over and over.

While evaluation is often conducted as a means to learn about the progress or impact of an initiative, evaluative thinking and continuous learning can be particularly important when working on complex issues in a constantly evolving system. And, when evaluation goes hand in hand with strategy, it helps organizations challenge their assumptions, gather information on the progress, effects, and influence of their work, and see new opportunities for adaptation and change. 


Towards the idea that complexity IS a theory of change

November 7, 2018 By Chris Corrigan Complexity, Design, Emergence, Evaluation, Featured, Learning 20 Comments

In the world of non-profits, social change, and philanthropy it seems essential that change agents provide funders with a theory of change.  This is nominally a way for funders to see how an organization intends to make change in their work.  Often on application forms, funders provide guidance, asking that a grantee provide an articulation of their theory of change and a logic model to show how, step by step, their program will help transform something, address an issue or solve a problem.

In my experience, most of the time “theory of change” is really just another term for “strategic plan,” in which an end point is specified and steps are articulated backwards from that end point, with outcomes identified along the way. Here’s an example. While that is helpful in situations where you have a high degree of control and influence, and where the nature of the problem is well ordered and predictable, it is not useful with complex, emergent problems. Most importantly, these documents are not theories of change but descriptions of activities.

For me a theory of change is critical. Looking at the problem you are facing, ask yourself: how do these kinds of problems change? If, for example, we are trying to work on a specific change to an education policy, the theory of change needs to be based on the reality of how policy change actually happens. To change policy you need to be influential enough with the government in power to be able to design and enact your desired changes with politicians and policy makers. How does policy change? Through lobbying, a groundswell of support, pressure during elections, participation in consultation processes and so on. From there you can design a campaign – a strategic plan – to see if you can get the policy changed.

Complex problems are a different beast altogether. They are non-linear, unpredictable and emergent. Traffic safety is an example. A theory of change for these kinds of problems looks much more like the dynamics of flocking behaviour: the problem changes through many, many small interactions and butterfly effects. A road safety program might work for a while until new factors come into play, such as distractions, raised speed limits, or increased use of particular sections of road. Suddenly the problem changes in a complex and adaptive way. It is not logical or rational, and one certainly can’t predict the outcome of actions.
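To make the flocking analogy concrete, here is a toy sketch of agents following a few simple local rules. Every rule, weight, and number in it is my own illustrative assumption, not something from this post or any formal method; the point is only that the global pattern emerges from local interactions and cannot be read off any single rule.

```python
# A toy model of "simple rules -> emergent pattern", in the spirit of flocking.
import random

N, STEPS = 30, 50

# Each agent: [x, y, vx, vy], randomly scattered with random headings.
agents = [[random.uniform(0, 100), random.uniform(0, 100),
           random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(N)]

def step(agents):
    avg_vx = sum(a[2] for a in agents) / N
    avg_vy = sum(a[3] for a in agents) / N
    cx = sum(a[0] for a in agents) / N
    cy = sum(a[1] for a in agents) / N
    new = []
    for x, y, vx, vy in agents:
        vx += (cx - x) * 0.01              # cohesion: drift toward the group
        vy += (cy - y) * 0.01
        for ax, ay, _, _ in agents:        # separation: avoid crowding
            if 0 < (ax - x) ** 2 + (ay - y) ** 2 < 25:
                vx -= (ax - x) * 0.05
                vy -= (ay - y) * 0.05
        vx += (avg_vx - vx) * 0.1          # alignment: match the average heading
        vy += (avg_vy - vy) * 0.1
        new.append([x + vx, y + vy, vx, vy])
    return new

for _ in range(STEPS):
    agents = step(agents)

# No rule says "form a flock", yet a collective pattern emerges -- and a small
# change to any weight above can change the whole pattern unpredictably.
cx = sum(a[0] for a in agents) / N
cy = sum(a[1] for a in agents) / N
print(max(((a[0] - cx) ** 2 + (a[1] - cy) ** 2) ** 0.5 for a in agents))
```

Tweak any single coefficient and rerun: the emergent shape shifts in ways no one programmed, which is exactly why predictive, end-point planning fails on problems with these dynamics.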

In my perfect world, it would be perfectly acceptable for grantees to say “our theory of change is complexity.” Complexity, to quote Michael Quinn Patton, IS a theory of change. Understanding that reality has radical implications for doing change work. This is why I am so passionate about teaching complexity to organizations and especially to funders. If funders believe that all problems can be solved with predictive planning and a logic model adhered to with accountability structures, then they will constrain grantees in ways that prevent them from actually addressing the nature of complex phenomena. Working with foundations to change their grant forms is hugely rewarding, but it needs to be supported with change-theory literacy at the more powerful levels of the organization and with those who are making granting decisions.

So what does it look like?

I’m trying these days to be very practical in describing how to address complex problems in the world of social change. For me it comes down to these basic activities:

Describe the current state of the system. This is a process of describing what is happening. It can be through a combination of looking at data, conducting narrative research and indeed, sitting in groups full of diversity and different lived experience and talking about what’s going on. If we are looking at road safety we could say “there are 70 accidents here this year” or “I don’t feel safe crossing the road at this intersection.” Collecting data about the current state of things is essential, because no change initiative starts from scratch.

Ask what patterns are occurring in the system. Gathering scads of data will reveal patterns that are repeating and recurring in the system. Being able to name these patterns is essential. It often looks as simple as “hey, do you notice that there are way more accidents at night, concentrated on this stretch of road?” Pattern logic, a process used in the Human Systems Dynamics community, is one way that we make sense of what is happening. It is an essential step, because in complexity we cannot simply solve problems; instead we seek to shift patterns.

Ask yourself what might be holding these patterns in place. Recently I have been doing this by asking groups to look at the patterns they have identified and answer this question: “If this pattern were the result of a set of principles and advice that we have been following, what would those principles be?” This helps you to see the structures that keep problems in place, and that is essential intelligence for strategic change work. It is an adaptation of part of the process called TRIZ, which seeks to uncover principles and patterns. So in our road safety example we might say that “make sure you drive too fast in the evening on this stretch of road” is a principle that, if followed, would increase danger at this intersection. Ask what principles would give you the behaviours that you are seeing. You are trying to find principles that are hypotheses: things you can test and learn more about. Those principles are what you are aiming to change in order to shift behaviour. A key piece of complexity as a theory of change is that constraints influence behaviour. These are sometimes called “simple rules,” but I’m going to refer to them as principles, because that will later dovetail better with a particular evaluation method.

Determine a direction of travel towards “better.” As opposed to starting with an end point in sight, in complexity you get to determine which direction you want to head, and you get to do it with others. “Better” is a set of choices you get to make, and they can be socially constructed and socially contested. “Better” is not inevitable and it cannot be predicted, but choosing an indicator like “fewer accidents everywhere and a feeling of safety amongst pedestrians” will help guide your decisions. In a road safety initiative this will direct you towards a monitoring strategy and towards context-specific actions for certain places that are more unsafe than others. Note that “eliminating accidents” isn’t possible, because the work you are trying to do is dynamic and adaptive, and changes over time. The only way to eliminate accidents is to ban cars. That may be one strategy, and in certain places that might be how you do it. It will of course generate other problems, and you have to be aware of and monitor for those as well. In this work we are looking for what is called an “adjacent possible” state for the system. What can we possibly change to take us towards a better state? What is the system inclined to do? Banning cars might not be that adjacent possible.

Choose principles that will help guide you away from the current state towards “better.” It’s a key piece of complexity as a theory of change that constraints in a system cause emergent actions. One of my favourite writers on constraints is Mark O’Sullivan, a soccer coach with AIK in Sweden who pioneers and researches constraint-based learning for children at the AIK academy. Rather than teach children strategy, he creates the conditions in which they can discover it for themselves. He gives children simple rules to follow in constrained, simulated game situations and lets them explore and experiment with solutions to problems in a dynamic context. In this presentation he shows a video of kids practicing simple rules like “move away from the ball” and “pass,” and watches as they discover ways to create and use space, which is an essential tactical skill for players, but one that cannot be taught abstractly and must be learned in application. Principles aimed at changing the constraints will help design interventions to shift patterns.

Design actions aimed at shifting constraints and monitor them closely. Using these simple rules (principles) and a direction of travel, you can begin to design and try actions that give you a sense of what works and what doesn’t. These are called safe-to-fail probes. In the road safety example, probes might include placing temporary speed bumps on the road, installing reflective tape or silhouettes on posts at pedestrian crossings, or placing a large object on the road to constrain the driving lanes and cause drivers to slow down. All of these probes will give you information about how to shift the patterns in the system, and some might produce results that will inspire you to make them more permanent. But in addition to monitoring for success, you have to monitor for emergent side effects. Slowing traffic down might increase delays for drivers, meaning that they drive with more frustration, meaning more fender benders elsewhere in the system. Complex adaptive systems produce emergent outcomes. You have to watch for them.

Evaluate the effectiveness of your principles in changing the constraints in the system. Evaluation in complex systems is about monitoring and watching what develops as you work. It is not about measuring the results of your work, doing a gap analysis and making recommendations. There are many, many approaches to evaluation, and you have to be smart in using the methods that suit the nature of the problem you are facing. In my opinion we all need to become much more literate in evaluation theory, because done poorly, evaluation can have the effect of constraining change work to a few easily observed outcomes. One form of evaluation that is getting my attention is principles-based evaluation, which helps you to look at the effectiveness of the principles you are using to guide action. This is why using principles as a framework helps to plan, act and evaluate.

Monitor and repeat. Working on complex problems has no end. A traffic safety initiative will change over time due to factors well beyond the control of the organization responding to it, and so there can never be an end point to the work. Strategies will have an effect, and then you need to look at the current state again and repeat the process. Embedding this cycle in daily practice is good capacity building: teams and organizations that can do it become more responsive and strategic over time.
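As a rough sketch of that cycle, and nothing more authoritative than that, the loop below renders the probe-monitor-adapt rhythm in hypothetical Python. The probe names come from the road-safety example above; the thresholds and the run_probe stand-in are invented for illustration.

```python
# A sketch of the plan-act-monitor cycle described above. In practice each
# "run_probe" call is weeks of real-world observation, not a function call.
import random

# Hypothetical safe-to-fail probes from the road-safety example.
probes = ["temporary speed bumps",
          "reflective tape at pedestrian crossings",
          "narrowed driving lanes"]

def run_probe(name):
    """Stand-in for monitoring one probe; numbers here are made up."""
    return {"accidents_delta": random.uniform(-0.3, 0.1),  # change in accidents
            "side_effects": random.random() < 0.2}         # emergent surprises

for cycle in range(3):                       # the work has no end point
    for name in probes:
        result = run_probe(name)
        if result["side_effects"]:
            print(f"dampen:  {name} (emergent side effect -- watch the system)")
        elif result["accidents_delta"] < 0:
            print(f"amplify: {name} (moving towards 'better' -- make permanent?)")
        else:
            print(f"retire:  {name} (didn't shift the pattern -- try another)")
    # ...then re-describe the current state of the system and repeat.
```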

Complexity IS indeed a theory of change. I feel like I’m on a mission to help organizations, social change workers and funders get a sense of how and why adapting to that reality is beneficial all round.

How are you working with complexity as a theory of change?


Working with principles: some thoughts and an exercise

October 15, 2018 By Chris Corrigan Chaordic design, Complexity, Design, Featured 2 Comments

I’m continuing to refine my understanding of the role and usefulness of principles in evaluation, strategy and complex project design.  Last week in Montreal with Bronagh Gallagher, we taught a bit about principles-based evaluation as part of our course on working with complexity. Here are some reflections and an exercise.

First off, it’s important to start with the premise that in working in complexity we are not solving problems, but shifting patterns. Patterns are the emergent results of repeated interactions between actors around attractors and within boundaries. To make change in a complex system therefore, we are looking to shift interactions between people and parts of a system to create a beneficial shift in the emergent patterns.

To do this we have to have a sufficient understanding of the current state of things so that we can see patterns, a sense of the need to shift them, and an agreed-upon, preferred and beneficial direction of travel. That is the initial strategic work in any complex change intervention. From there we create activities that help us probe the system and see what will happen, which way it will go, and whether we can do something that will take it in the beneficial direction. We then continue this cycle of planning, action and evaluation.

Strategic work in complexity involves understanding this basic set of premises, and here the Cynefin framework is quite useful for distinguishing between work that is best served by linear predictive planning – where a chain of linked events results in a predictable outcome – and work that is best served by complexity tools including pattern finding, collective sensemaking and collaborative action.

Working with principles is a key part of this, because principles (whether explicit or implicit) are what guide patterns of action and give them the quality of a gravity well, out of which alternative courses of action are very difficult to climb. Now, I fully realize that there is a semantic issue here around using the term “principles,” and that in some of the complexity literature we use, the terms “simple rules” or “heuristics” appear instead. Here I am using “principles” specifically to tie this to Michael Quinn Patton’s principles-based evaluation work, which I find helpful in linking the three areas of planning, action, and evaluation. He defines an effectiveness principle as something that meets the following criteria:

  1. Guides directionality
  2. Is useful and usable
  3. Provides inspiration for action
  4. Is developmental in nature and allows for the development of approaches (in other words, not a tight constraint that restricts creativity)
  5. Is evaluable, in that you can know whether you are doing it or not

These five qualities are what he calls “GUIDE,” an acronym made from the key criteria. Quinn Patton argues that if you create these kinds of principles, you can assess their effectiveness in creating new patterns of behaviour or response to a systemic challenge.  That is helpful in strategic complexity work.  
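To show how GUIDE can serve as a practical checklist, here is a minimal sketch that encodes the five criteria as a data structure. The encoding is my own illustration, not a tool from Patton’s book; the example principle is one that reappears later in this post.

```python
# A minimal sketch of Patton's GUIDE criteria rendered as a checklist.
from dataclasses import dataclass

@dataclass
class Principle:
    text: str
    guides_direction: bool   # G: gives action a direction of travel
    useful: bool             # U: useful and usable in real decisions
    inspiring: bool          # I: provides inspiration for action
    developmental: bool      # D: leaves room to develop approaches
    evaluable: bool          # E: you can know whether you are doing it

    def is_effectiveness_principle(self) -> bool:
        """True only if all five GUIDE criteria are met."""
        return all([self.guides_direction, self.useful, self.inspiring,
                    self.developmental, self.evaluable])

p = Principle(
    text="Create processes that generate and reward small scale failure.",
    guides_direction=True, useful=True, inspiring=True,
    developmental=True, evaluable=True,
)
print(p.is_effectiveness_principle())   # True: worth testing and evaluating
```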

To investigate this, we did a small exercise, which I’m refining as we go. On our first day we held a sensemaking café to look at patterns where people in our workshop felt “stuck” in their work with clients and community organizations. Examples of repeating patterns included confronting aversion to change, the use of power to disenfranchise community members, lack of adequate resources, and several others. I asked people to pick one of these patterns and to write, using the GUIDE criteria, a principle that seems to be at play in keeping that pattern in place.

For example, on aversion to change, one such principle might be “create processes that link people’s performance to maintaining the status quo.” You can see that many things could be generated from such a principle, and that an emergent outcome of it might well be aversion to change. This is not a diagnostic exercise. Rather, it helped people understand the role that principles play in containing action with attractors and boundaries. In most cases, people were not working in situations where “aversion to change” was a deliberate outcome of strategic work, and yet there was the pattern nonetheless, clear and obvious even in settings where innovation and creativity are supposedly prized and encouraged.

Next I invited people to identify a direction of travel away from this particular pattern, using a reflection on values. If aversion to change represents a pattern you value negatively, what is an alternative pattern, and what is the value beneath it? Values are hard to identify, but they are pretty pithy statements about what matters. One value might be “curiosity about possibility” and another might be “excitement for change.” From there participants were asked to write a principle that might guide action towards the emergence of that new pattern. One such example might be “create processes that generate and reward small scale failure.” I even had them take that one statement and reduce it to a simple rule, such as “Reward failure, doubt…”

The next step is to put these principles into play within an organization and create tests to see how effective each one is. If you discover that a principle works, refine it and do more. If it doesn’t, or if it creates another poor pattern such as cynicism, stop using it and start over.


The limits of certainty

September 28, 2018 By Chris Corrigan Complexity, Evaluation, Featured

An interesting review essay by John Quiggin looks at a new book by Ellen Broad called Made by Humans: The AI Condition. Quiggin is intrigued by Broad’s documentation of the way algorithms have changed over the years, from originating as “a well-defined formal procedure for deriving a verifiable solution to a mathematical problem” to becoming formulas for predicting unknown and unknowable futures. Math problems that benefit from algorithms fall firmly in the Ordered domains of Cynefin. But the problems on which AI is now being deployed are complex and emergent in nature, and so instead of producing certainty and replicability, AI is being asked to provide probabilistic forecasts of the future.

For the last thousand years or so, an algorithm (derived from the name of an Arab mathematician, al-Khwarizmi) has had a pretty clear meaning — namely, it is a well-defined formal procedure for deriving a verifiable solution to a mathematical problem. The standard example, Euclid’s algorithm for finding the greatest common divisor of two numbers, goes back to 300 BCE. There are algorithms for sorting lists, for maximising the value of a function, and so on.

As their long history indicates, algorithms can be applied by humans. But humans can only handle algorithmic processes up to a certain scale. The invention of computers made human limits irrelevant; indeed, the mechanical nature of the task made solving algorithms an ideal task for computers. On the other hand, the hope of many early AI researchers that computers would be able to develop and improve their own algorithms has so far proved almost entirely illusory.

Why, then, are we suddenly hearing so much about “AI algorithms”? The answer is that the meaning of the term “algorithm” has changed. A typical example, says Broad, is the use of an “algorithm” to predict the chance that someone convicted of a crime will reoffend, drawing on data about their characteristics and those of the previous crime. The “algorithm” turns out to over-predict reoffending by blacks relative to whites.

Social scientists have been working on problems like these for decades, with varying degrees of success. Until very recently, though, predictive systems of this kind would have been called “models.” The archetypal examples — the first econometric models used in Keynesian macroeconomics in the 1960s, and “global systems” models like that of the Club of Rome in the 1970s — illustrate many of the pitfalls.

A vast body of statistical work has developed around models like these, probing the validity or otherwise of the predictions they yield, and a great many sources of error have been found. Model estimation can go wrong because causal relationships are misspecified (as every budding statistician learns, correlation does not imply causation), because crucial variables are omitted, or because models are “over-fitted” to a limited set of data.

Broad’s book suggests that the developers of AI “algorithms” have made all of these errors anew. Asthmatic patients are classified as being at low risk for pneumonia when in fact their good outcomes on that measure are due to more intensive treatment. Models that are supposed to predict sexual orientation from a photograph work by finding non-causative correlations, such as the angle from which the shot is taken. Designers fail to consider elementary distinctions, such as those between “false positives” and “false negatives.” As with autonomous weapons, moral choices are made in the design and use of computer models. The more these choices are hidden behind a veneer of objectivity, the more likely they are to reinforce existing social structures and inequalities.

The superstitious reverence with which computer “models” were regarded when they first appeared has been replaced by (sometimes excessive) scepticism. Practitioners now understand that models provide a useful way of clarifying our assumptions and deriving their implications, but not a guaranteed path to truth. These lessons will need to be relearned as we deal with AI.

Broad makes a compelling case that AI techniques can obscure human agency but not replace it. Decisions nominally made by AI algorithms inevitably reflect the choices made by their designers. Whether those choices are the result of careful reflection, or of unthinking prejudice, is up to us.

In general I think that scientists understand the limits of this approach to modelling, and that was borne out in several discussions I had with ecologists last week in Quebec. We do have to define what we mean by “prediction,” though. Potential futures can be predicted with some probability if you understand the nature of the system, but exact outcomes cannot be. However, we (by whom I mean the electorate and the policy makers who work to make single decisions out of forecasts) do tend to venerate predictive technologies, because we cling to the original definition of an algorithm and come to believe that a model’s robustness is enough to guarantee the accuracy of a prediction. We end up trusting forecasts without understanding probability, and when things don’t go according to plan, we blame the forecasters rather than our own complexity illiteracy.
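Since the quoted essay leans on Euclid’s algorithm as the archetype of the word’s original meaning, it is worth seeing just how small and how certain such an algorithm is. This is the standard textbook form: deterministic, terminating, and verifiable, in contrast to the probabilistic “algorithms” the essay critiques.

```python
# Euclid's method for the greatest common divisor (standard textbook form).
# The same inputs always give the same, checkable answer -- the original
# sense of "algorithm" that the essay contrasts with predictive models.
def gcd(a: int, b: int) -> int:
    while b:
        a, b = b, a % b
    return a

assert gcd(48, 18) == 6   # verifiable: 6 divides both, and no larger number does
```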


Selecting weak signals and building in diversity and equity

June 14, 2018 By Chris Corrigan Art of Harvesting, Complexity, Emergence, Facilitation, Featured 3 Comments

When working in complexity, and when trying to create new approaches to things, it’s important to pay attention to ideas that lie outside of the known ways of doing things.  These are sometimes called “weak signals” and by their very nature they are hard to hear and see.

At the Participatory Narrative Inquiry Institute, they have been thinking about this stuff.  On May 31, Cynthia Kurtz posted a useful blog post on how we choose what to pay attention to:

If you think of all the famous detectives you know of, fictional or real, they are always distinguished by their ability to hone in on signals — that is, to choose signals to pay attention to — based on their deep understanding of what they are listening for and why. That’s also why we use the symbol of a magnifying glass for a detective: it draws our gaze to some things by excluding other things. Knowing where to point the glass, and where not to point it, is the mark of a good detective.

In other words, a signal does not arise out of noise because it is louder than the noise. A signal arises out of noise because it matters. And we can only decide what matters if we understand our purpose.

That is helpful. In complexity, purpose and a sense of direction help us choose courses of action, from making sense of the data we are seeing to acting on it.

By necessity that creates a narrowing of focus, so paying attention to how weak signals work is also important. Yesterday the PNI Institute discussed this on a call, which resulted in a nice set of observations about the people seeking weak signals and the nature of the signals themselves:

We thought of five ways that have to do with the observer of the signal:

  1. Ignorance – We don’t know what to look for. (Example: the detective knows more about wear patterns on boots than anyone else.)
  2. Blindness [sic] – We don’t look past what we assume to be true. (No example needed!)
  3. Disinterest – We don’t care enough about what we’re seeing to look further. (Example: parents understand their toddlers, nobody else does.)
  4. Habituation – We stopped looking a long time ago because nothing ever seems to change. (Example: A sign changes on a road, nobody notices it for weeks.)
  5. Unwillingness – It’s too much effort to look, so we don’t. (Example: The “looking for your keys under the street light” story is one of these.)

And we listed five ways a signal can be weak that have to do with the system in which the observer is embedded:

  1. Rare – It just doesn’t happen often.
  2. Novel – It’s so new that nobody has noticed it yet.
  3. Overshadowed – It does happen, but something else happens so much more that we notice that instead.
  4. Taboo – Nobody talks about it.
  5. Powerless – Sometimes a signal is literally weak, as in, those who are trying to transmit it have no power.

You can see that this has important implications for building equity and diversity into sense-making processes. People with different lived experiences, ways of knowing and ways of seeing will pay attention to signals differently. If you are trying to build a group with an increased capacity to scan and make sense of a complex problem, cognitive and experiential diversity will help you find many new ideas that are useful in addressing complex problems. Furthermore, you need to pay attention to people whose voices are traditionally quieted in a group, so as to amplify their perspectives on powerless signals.
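A toy illustration of that last point, entirely my own construction rather than anything from the PNI post: if each observer can only detect the signals within their own ways of seeing, a homogeneous group covers no more of the signal space than one person, while a diverse group covers far more.

```python
# Observer diversity widens signal coverage; sameness adds nothing.
import random

SIGNALS = set(range(100))            # 100 weak signals present in the system

def observer():
    """One observer notices a random 15% slice of the signal space."""
    return set(random.sample(sorted(SIGNALS), 15))

same_view = observer()
homogeneous = [same_view] * 8        # eight people who see the same way
diverse = [observer() for _ in range(8)]   # eight different perspectives

print(len(set.union(*homogeneous)))  # 15 -- no new coverage from sameness
print(len(set.union(*diverse)))      # typically ~70 of 100 signals covered
```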

