Open Thinkering

Why some organisations learn to be less wrong (and others don't)

[Image: derelict building surrounded by rising water. Photo by Julius Jansson / Unsplash]

This is the third post in a series about mental models in a polycrisis. My first post argued that we should treat our beliefs as working hypotheses. Then the second made the case for building a personal thinking system that supports revision. This post takes things a step further, moving from individual cognition to the shared mental models embedded in organisations and institutions.


In the second post in this series, I made the case that the tools you use are part of how you think, and that a well-designed “thinking system” supports the habit of updating your beliefs. That argument was about us as individuals, but we don’t think in a vacuum. Individual thinking usually happens within organisations and institutions, and these structures carry mental models of their own.

Organisational policies, meeting structures, incentive systems, and hiring criteria are all examples of collective “cognitive scaffolding.” This scaffolding encodes assumptions about what matters, who knows best, and how decisions get made. Last time, I showed how a badly designed personal tool stack can amplify bias; a badly designed institution can amplify outdated assumptions across hundreds or thousands of people.

Many of our institutions resemble buildings designed for a different shoreline.

In their 1994 paper Shared Mental Models: Ideologies and Institutions, economists Arthur Denzau and Douglass North argue that, to understand how we can work together under uncertainty, we need to study the relationships between mental models:

In order to understand decision making under such conditions of uncertainty we must understand the relationship between the mental models that individuals construct to make sense out of the world around them, the ideologies that evolve from such constructions, and the institutions that develop in a society to order interpersonal relationships.

Shared mental models work well under stable conditions. They enable people to coordinate without having to renegotiate everything from scratch each morning. The problem comes, however, when the environment or conditions shift, because the theories encoded in our organisations and institutions were not written for times of polycrisis.

Institutional inertia

In Mental models and institutional inertia, Eckehard Rosenbaum argues that institutional inertia is “Janus-faced.” By this he means that it is “both a precondition for institutions to function, and a factor which prevents or slows down attempts at achieving institutional change.” In a polycrisis, the cost of that slowdown in institutional change is amplified.

In other words, the very things that make institutions work (shared assumptions, stable rules, and predictable processes) are the same things that stop them adapting. As a former teacher, I know it’s impossible to run a school if every teacher disagrees about what a lesson should look like. But a school’s shared understanding of “what a lesson looks like” also makes it extraordinarily hard to rethink pedagogy when the world changes.

Rosenbaum’s argument is that mental models are not mere descriptions of institutions. Instead, mental models are the “very bearers” of them: institutions exist in the mental models of the people who reproduce them through their daily actions. Changing the models means changing the institution, but mental models are reinforced every time the institution works as expected. It’s a self-stabilising feedback loop.

The “iron cage”

Just as individuals don't exist in isolation, nor do organisations and institutions. They are organised in fields: clusters of organisations that compete, fund, regulate, and imitate one another.

In a 1983 paper entitled The Iron Cage Revisited, Paul DiMaggio and Walter Powell make a simple observation:

Rational actors make their organizations increasingly similar as they try to change them.

DiMaggio and Powell identify three mechanisms that push organisations towards homogeneity. They prefer the mathematical term “isomorphism”, which in practice simply means becoming more alike in structure and behaviour.

So what are these isomorphic tendencies?

  • Coercive isomorphism: Regulators, funders, and legal requirements force conformity. For example, if Ofsted inspects schools against a particular framework, then schools organise themselves to satisfy that framework. This is independent of whether or not it produces a good educational outcome for young people.
  • Mimetic isomorphism: When uncertain, organisations copy whatever appears to work elsewhere. DiMaggio and Powell note that “as an innovation spreads, a threshold is reached beyond which adoption provides legitimacy rather than improves performance.” As a result, everyone adopts the same strategy not because it’s been shown to work, but because not adopting it feels risky.
  • Normative isomorphism: Professional training and qualifications spread a shared worldview across an entire sector. Teachers, for example, are usually people who did relatively well in the existing system, so they tend to see the same problems and reach for the same solutions.

The result of these three types of isomorphism is that structural change is driven less by competition or the search for efficiency, and more by the pressure to conform. It’s the institutional equivalent of what I described in my first post as confirmation bias, but operating at a field-wide scale.

Organisations don’t just resist change individually; they resist it collectively. A school might want to rethink assessment, but it still exists within a system of league tables and inspection frameworks. Likewise, a charity that wants to rethink its service model still has stakeholders who will measure it by the old model’s yardstick.

So, what makes the difference?

From what I've discussed so far about internal inertia and external isomorphism, you'd be forgiven for wondering whether organisations can ever update their shared mental models.

It is possible. From what I’ve seen, read, and experienced, the organisations that manage to update their mental models share five structural characteristics.

1. They treat objections as information

In my second post I mentioned that We Are Open Co-op uses Sociocracy, a form of consent-based decision making. Under this model, proposals go ahead not because everyone agrees they’re the best option, but because they’re “good enough for now, and safe enough to try”: in effect, nobody has a reasoned objection that going ahead would cause harm.

Objections aren’t a form of dissent to be overruled, but signals that the proposal might not yet be “safe enough to try.” This is the organisational equivalent of the epistemic humility I described in the first post. It builds into the governance structure a mechanism for surfacing “we might be wrong about this” without requiring the person raising the concern to win a political battle.

Contrast this with hierarchical or majority-vote governance, where minority concerns get outvoted and the organisation loses access to information that might have prevented a costly mistake. The governance structure itself determines how much of the available thinking the organisation can actually use.
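To make that contrast concrete, here’s a minimal sketch of the two decision rules side by side. It’s illustrative only: the names and structure are my own invention, not part of any real sociocratic tooling.

```python
# A minimal sketch of consent-based versus majority-vote decision making.
# All names here are illustrative, not drawn from any real Sociocracy tool.

from dataclasses import dataclass

@dataclass
class Response:
    member: str
    supports: bool                          # how they'd vote in a majority system
    reasoned_objection: str | None = None   # "this would cause harm because..."

def majority_vote(responses: list[Response]) -> bool:
    # Minority concerns are simply outvoted; their content is discarded.
    return sum(r.supports for r in responses) > len(responses) / 2

def consent_round(responses: list[Response]) -> tuple[bool, list[str]]:
    # A proposal proceeds unless someone raises a reasoned objection that it
    # would cause harm: "good enough for now, and safe enough to try".
    objections = [r.reasoned_objection for r in responses if r.reasoned_objection]
    return (len(objections) == 0, objections)

responses = [
    Response("A", supports=True),
    Response("B", supports=True),
    Response("C", supports=False,
             reasoned_objection="Pilot clashes with the exam timetable"),
]

print(majority_vote(responses))  # True: the objection's content is lost
print(consent_round(responses))  # (False, [...]): the objection surfaces as information
```

The structural point is in the return values: the vote throws away the objection, while the consent round hands it back to the organisation as information it can act on.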

2. They have short feedback loops and a licence to experiment

It sounds obvious, but it's true: organisations that run small, time-limited experiments and learn from them tend to update their mental models faster than those that only make large, irreversible decisions. Implementing agile ways of working, piloting policies, and encouraging experimentation all shorten the gap between action and feedback.

In my first post, I argued for treating beliefs as working hypotheses that have what William James called “cash value.” Organisations that treat policies as working hypotheses are doing the same thing on a collective basis. They ask questions such as: What would we expect to see if this policy is working? When will we check? What would make us stop?

Sadly, many organisations and institutions instead do the opposite. They make large, expensive commitments (e.g. new IT systems, restructures, 3-year strategies) and then measure success against criteria that were defined before the work began. The feedback loop between action and learning is so long that by the time the evidence arrives, the people who made the decision have moved on. The sunk cost fallacy kicks in, and admitting the approach isn’t working feels like an existential threat.
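As a rough sketch of what the alternative looks like in practice, here’s one way of writing a policy down as a working hypothesis. The structure and field names are hypothetical, invented for illustration rather than taken from any particular framework.

```python
# A hypothetical structure for treating a policy as a working hypothesis.
# The fields mirror the three questions above: what would we expect to see,
# when will we check, and what would make us stop?

from dataclasses import dataclass
from datetime import date

@dataclass
class PolicyExperiment:
    name: str
    expected_signals: list[str]   # what we'd expect to see if it's working
    review_date: date             # when we'll check
    stop_conditions: list[str]    # what would make us stop

    def due_for_review(self, today: date) -> bool:
        return today >= self.review_date

pilot = PolicyExperiment(
    name="Four-day meeting-free week",
    expected_signals=["Fewer rescheduled deadlines", "Higher wellbeing survey scores"],
    review_date=date(2026, 3, 1),
    stop_conditions=["Coordination failures increase", "Clients report slower responses"],
)

# The point is the discipline, not the code: the review date and the stop
# conditions are written down *before* the sunk cost fallacy can kick in.
if pilot.due_for_review(date.today()):
    print(f"Time to review: {pilot.name}")
```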

3. They celebrate a diversity of perspectives

Instead of diversity as a tick-box exercise, organisations that can change their mental models have structural mechanisms bringing different viewpoints into contact during decision-making. This might look like cross-functional teams, rotating roles, external advisory panels, or even something as simple as a “devil's advocate” or pre-mortem exercise.

As DiMaggio and Powell pointed out, normative isomorphism means that homogeneous professional backgrounds produce homogeneous mental models. Gender and race aside, if everyone in the room has the same training, they’ll have the same blind spots. Structural diversity is a countermeasure to this, increasing the range of mental models available to the organisation when it needs to make sense of something new.

Karl Weick’s work on sensemaking is also relevant here. He showed that loosely coupled organisations are often better at “local adaptation” than tightly coupled ones. This is because, in a loosely coupled system, different parts of the organisation can respond to different signals without waiting for permission from the centre.

4. They tolerate ambiguity

Weick’s research on sensemaking in crises showed that leaders who can sit in ambiguity (my phrase) long enough tend to enable better collective decisions than those who rush to a single narrative. He defined sensemaking as “the ongoing retrospective development of plausible images that rationalise what people are doing.” The key word here is plausible, rather than correct. Sensemaking is about arriving at stories that are “good enough” to act on, not about arriving at The Truth.

In my consultancy work, I come across organisations whose culture rewards decisive-sounding pronouncements (“we have a clear strategy”). They tend to lock in mental models prematurely. Organisations whose culture tolerates ambiguity (“we’re not sure yet, but here’s what we’re testing”) keep their options open for longer.

As Weick put it: “How can I know what I think until I see what I say?” The implication for organisations is that they need to say things (i.e. try things, make proposals, run experiments) in order to think (i.e. learn what works). They also need to be willing to revise what they’ve said in light of what they’ve learned.

5. They anchor their identity to purpose, not method

I find it curious that some organisations and institutions define themselves so tightly by a particular way of working that changing their approach feels like abandoning who they are. For example, a newspaper that defines itself as a print publication will struggle to become a digital one. A university that defines itself as a provider of three-year campus-based degrees will struggle to adapt to a world of microcredentials.

Although we like specificity, organisations with a more abstract sense of purpose (e.g. “we help people make sense of what’s happening” or “we support lifelong learning”) have more wiggle room to update how they do what they do. Their shared mental model is about the mission, rather than the machinery.

Returning again to Weick, his concept of loose coupling is also useful here. He observed that educational organisations are often loosely coupled: the counsellor’s office is “somehow attached” to the head’s office, “but each retains some identity and separateness.” This can look like dysfunction, but it’s also a source of resilience. Loosely coupled organisations have more room for adaptation, meaning more room for updating mental models at the edges.

Gaining leverage

The systems thinker Donella Meadows famously identified twelve leverage points (places to intervene in a system), which I’ve discussed on this blog before. They give us a way to see why the five structural characteristics above matter so much more than the usual organisational change efforts.

Meadows was critical of most institutional reform projects, which she referred to as “diddling with the details, arranging the deck chairs on the Titanic.” She estimated that “probably 90, no 95, no 99 percent of our attention goes to parameters, but there’s not a lot of leverage in them.”

The five structural characteristics I’ve described sit much higher on her list:

Feature → Meadows leverage point

1. Treating objections as information → #5: The rules of the system
2. Short feedback loops and a licence to experiment → #6: The structure of information flows
3. Celebrating a diversity of perspectives → #4: The power to self-organise
4. Tolerating ambiguity → #2: The mindset or paradigm
5. Anchoring identity to purpose, not method → #3: The goals of the system

The highest point on Meadows' list, above even paradigms, is “the power to transcend paradigms.” Meadows described this as keeping “oneself unattached in the arena of paradigms, to stay flexible, to realise that NO paradigm is ‘true,’ that every one, including the one that sweetly shapes your own worldview, is a tremendously limited understanding of an immense and amazing universe.”

That sounds a lot like the epistemic humility I described in my first post, but scaled up to the level of an entire system. It also sounds like what an organisation is doing when it treats its own policies, structures, and assumptions as working hypotheses rather than settled truths.

Dropping your tools

I'll finish with a story Weick recounted in a paper called Drop Your Tools (1996). In separate wildfire disasters, firefighters were ordered to drop their heavy tools so they could run faster and escape a “blowup.” Many of them didn’t, and 27 people died, some within sight of safety, still carrying their equipment.

Weick’s understandable question was: why would someone keep hold of a tool that was about to kill them? His answers ranged from the practical (tools are part of a firefighter’s identity, so dropping them feels like stopping being a firefighter) to the structural (the order to drop tools came from someone whose authority was ambiguous).

The allegory extends beyond firefighting, with Weick observing that “social scientists refuse to drop their paradigms, parables, and propositions when their own personal survival is threatened.” Organisations do likewise.

The “tools” in question might be a strategy, a metric, a governance structure, or a definition of what the organisation is. In times of polycrisis, the conditions that made those tools useful may no longer hold. The question is whether the organisation can recognise that in time – and whether it has the structural capacity to let go.

Final words: tying these threads together

Across the three posts in this series, I've moved the argument from the individual to the organisation:

  1. Your beliefs are working hypotheses. So name them, test them, and revise them. (Post 1)
  2. Your tools are part of how you think. So design them to support revision, not just productivity. (Post 2)
  3. Your organisation’s structures encode shared mental models. So focus on the structural characteristics that allow these to be updated more easily. (This post)

In times of polycrisis, organisations and institutions that can’t update their shared mental models keep applying yesterday’s assumptions to today’s problems. The ones that can update them, though, share structural characteristics that are worth studying, replicating, and (where possible) building into new organisations from the start.

And, finally, if you need some help with any or all of this, get in touch.