
My friend Holger Nauheimer is busy working on The Human-AI Facilitation Manifesto (LinkedIn link). Here is his most recent draft:
- Perception is plural. Humans sense emotions and atmosphere. AI sees patterns and structure. Together, they reveal deeper coherence.
- Meaning emerges in relationship. AI offers structure, but humans bring the stories that make sense of the structure.
- Belonging is human. AI can stabilize language – but trust grows only between people.
- Depth matters more than speed. AI adds value not by optimizing, but by making visible what is hard to say.
- Neutral clarity is a gift. AI can name tensions without judgment, offering safety without shying away from truth.
- Courage is shared. Humans bring vulnerability. AI brings steadiness. Together, they hold the uncomfortable without collapse.
- This is not a tool upgrade. It is a shift in attention. Hybrid facilitation expands what can be seen, said, and sensed.
- Clarity is not authority. AI can hold patterns, but humans must hold responsibility. Hybrid facilitation works best when projection is named and agency stays human.
Here are some thoughts I have on this, simple thoughts, thoughts off the top of my head. Starting points.
First of all, I’m not loving the “AI does this, humans do this” construction of this manifesto. I think we shouldn’t put humans and AI on the same footing. If we want a manifesto to talk about how AI can be an aid to facilitation and sensemaking, we should talk about what it can do, and what it currently cannot do. I think there is always a place for human beings to talk about facilitation and also what OUR role is in it, because honestly, some forms of what passes for facilitation (especially the wrong processes used in the wrong contexts) can be more damaging than just letting AI ask you a bunch of questions and leaving your group to talk about them.
So given that… thoughts on these points.
Perception is plural. I don’t think AI “perceives.” At least not the AI most of us are using in 2025. It analyses, and uses algorithms and probability tables to autocomplete thoughts. It can be trained to be agreeable, or contrarian, or a Nazi, or whatever. But it doesn’t “see.” It offers material that becomes one more part of the information load that humans take in. But how humans perceive AI output matters a great deal. Some might dismiss it. Some might give it a kind of divine appreciation. I’m already seeing lots of blog posts starting with “I asked ChatGPT, and this is what it said…” as if ChatGPT is somehow more perceptive, or smarter, or has access to better facts than anyone in particular. Perception is something human beings do. We do it individually, and we do it together in groups. Computers don’t perceive. And computers don’t understand depth. See below.
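A quick aside in code, because the “probability tables” point is easier to show than to say. Here is a toy autocompleter of my own invention: the words and numbers are made up, and real LLMs use neural networks rather than literal lookup tables, but the gesture is the same. Roll weighted dice to pick the next word. No perceiving required.

```python
# A toy next-word autocompleter. My illustrative sketch only: the
# probabilities are invented, and no real model works from a table like this.
import random

# Hypothetical "probability table": given the last two words,
# how likely is each candidate next word?
next_word_probs = {
    ("facilitation", "is"): {"hard": 0.4, "human": 0.35, "emergent": 0.25},
}

def autocomplete(context, probs):
    """Sample the next word from the learned distribution for this context."""
    words, weights = zip(*probs[context].items())
    return random.choices(words, weights=weights)[0]

# The machine "completes a thought" by sampling from the distribution.
print(autocomplete(("facilitation", "is"), next_word_probs))
```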
Meaning emerges in relationship. Yes. 100% yes. AI offers structure the way a banana offers structure, or a photograph, or a stray feather. AI does not offer the kind of relational meaning-making that humans experience together, because it does not have the same cognition that humans do. Human beings can take any object and use it to craft a ritual and stimulate new thoughts and experiences. This can be very helpful, in that it can introduce oblique stimuli into an environment and help us find new thoughts and ideas through association, metaphor, interpretation, cultural norming or culture breaking. We use tools like Visual Explorer or poetry and art for this in group work, and AI is an excellent source of obliquity and ambiguity precisely because it is capable of NOT being in relationship. We are capable of actionable insight, which triggers a particular process in our brains that not only makes meaning but does something to the relationship and the relational field as a result. It builds community, friendship, love. Or hate, despair, and panic. AI isn’t doing that.
Belonging is human. Which follows from the above. AI has no role in belonging. A person belongs when they are claimed by others. If you find yourself being “claimed” by AI, be careful. You are being manipulated.
Depth matters more than speed. Sometimes. Sometimes not. It depends. To AI, everything is speed. Has anyone asked AI to take its time and let its thought process really deepen? To go for a walk and let its brain tense and relax in ways that open new pathways? Nope. AI delivers things fast. I’m not sure it is capable of what we mean by “depth.” We perceive depth as a vertical axis of meaning. We order thoughts and experiences by whether they are shallow or deep. It has nothing to do with speed. AI, I suspect, uses flat semantic structures. It is associative. It would not understand depth the way you understand depth: as perceiving something to be more meaningful, in this moment, to you and your context, than something else. If you say the word “John” right now it might mean nothing to you. But that was my father’s name, and as I type it I look up at the picture I have of him drinking our last whiskey together: a dram of Ledaig 10-year-old malt, chosen because it was the distillery closest to Iona, where I finished a pilgrimage in 2018, and because we were talking that evening about spirituality and remembering the drams we shared together on our trip through Ireland in 2012. But to ChatGPT 5, what does “John” mean? “‘John’ feels like an everyman name. A placeholder for the ordinary person — anyone and no one in particular” (emphasis the robot’s, not mine). Oof.
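Another aside in code, for the “flat semantic structures” claim. This is a toy of my own making: the vectors are hand-invented rather than real learned embeddings, but the geometry makes the point. In an associative space, “John” sits near other everyman names, and nothing in the space encodes a father, a photograph, or a pilgrimage.

```python
# Toy illustration of flat, associative "meaning": words as vectors,
# similarity as cosine. The vectors here are invented for the example;
# real models learn theirs from text, but the space is just as flat.
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical 3-d embeddings: [person-ness, name-ness, whisky-ness]
vectors = {
    "John":   [0.9, 0.9, 0.0],
    "Mary":   [0.9, 0.9, 0.0],
    "Ledaig": [0.1, 0.2, 0.9],
}

# "John" is near "Mary" (two everyman names) and far from "Ledaig".
# The father, the photograph, the pilgrimage: none of it is in the geometry.
print(cosine(vectors["John"], vectors["Mary"]))    # high: associatively close
print(cosine(vectors["John"], vectors["Ledaig"]))  # low: no shared depth
```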
Neutral clarity is a gift. It is very hard for a human being to offer neutral, clear feedback to another person. But AI will not spare your feelings. My favourite use of LLMs is to have them critique my writing and ideas: tell me where I am wrong, where others will disagree with me, where I am about to make a fool of myself. This is a helpful function.
Courage is shared. I feel like relying on AI to give me courage is foolish. I feel like I need courage NOT to rely on it. For example, this blog post. I’m writing it and dashing it off so Holger and others can reflect on it, and so I can think out loud on these issues. And I’m not going to give it to ChatGPT for feedback. I am noticing that THAT requires more courage than hiding behind something that might polish it up. If I were publishing in a journal, I’d want that (and a good editor). But right now I want to write a fully human post in my own voice, so YOU all can weigh in and tell me what YOU think too, without using your LLM to critique it.
This is not a tool upgrade. Indeed. It’s just another tool. Not THE tool. Not a phase shift in how we do facilitation. I have seen facilitators discover a new tool like Open Space Technology and evangelize the hell out of it, saying that it should be used everywhere, all the time, and in exactly the same way for everything. Humans can be very good at creating and using tools, but we have also evolved practices of apprenticeship and mentorship in using and then making tools. AI doesn’t replace that. We need good mentors to apprentice to as facilitators. And then we can think about how to use our tools well.
Clarity is not authority. I don’t think AI offers any special clarity, and I do not think it has a lock on seeing patterns. Humans are exceptional at spotting patterns. Our brains are possibly the most complex things we know of in the universe (although, as Emo Philips once joked, you have to think about who is telling you that!). We are built to spot patterns. And we are full of filters and biases and inattentional blindness. We are prone to enacted cognition. We are neurodiverse and cognitively gifted in different ways. And so working with others helps us spot patterns and validate useful ones. If AI is part of your pattern-spotting family, so be it. Just realize that it lacks all the tools we have to make sense of patterns in complexity. It can only work with what it has got. Its processes of insight are reducible. Ours are not. They are emergent.
That’s me. What do you think?
