
When AI remembers everything and organisations forget how to choose

[Image: Overlapping Polaroid photos. Photo by sarandy westfall / Unsplash]

This is the third in a series of posts using stories from Jorge Luis Borges' collection Labyrinths to examine how AI is reshaping decision-making in mission-driven organisations. My first post explored how AI systems become inescapable in The (AI) lottery is already running. The second looked at how they can narrow our choices before we know we are choosing in When AI tools give you choices. This third post comes at things from a different angle: what happens when the tools give us everything we asked for – and it turns out that everything is... too much?


From 1917 to 1983, architect, systems theorist, and inventor Buckminster Fuller kept a diary he called the Dymaxion Chronofile. It was an obsessive record of his life:

Fuller's Chronofile contains over 140,000 pieces of paper, as well as 64,000 feet of film, 1,500 hours of audio tape, and 300 hours of video recordings. The Chronofile is cross-referenced alphabetically using 13,500 5x8 inch index cards. Photos from Fuller's childhood from age four were added retrospectively.

It was his attempt to capture a life in its full detail, to make what we would today call a searchable “external memory.”

Most mission-driven organisations have something akin to a Dymaxion Chronofile: email archives, CRM systems, analytics dashboards, shared documents, and video conference recordings. These days, AI tools often augment these systems, ready to retrieve any fragment on demand and to summarise whatever they find. It can feel like warp-speed progress: nothing disappears and everything can be brought back into view.

We might think that perfect recall would be an amazing gift, or something to strive for. The experience of people with hyperthymesia suggests otherwise. It's a rare condition in which people are able to remember an abnormally large number of their life experiences in exhaustive detail. Importantly, they do not experience this as a superpower, but as distracting, intrusive, and tiring.

It turns out that forgetting is how humans function. We don't just store information: we compress it, reinterpret it, and let things fade into the background so that we can act now, in this situation or context, and with this set of priorities.

The same is true for organisations. There's a difference between being able to recall everything and being able to decide what to do next. AI tools might promise to make our institutional memories searchable and analysable at scale, but the risk is that we confuse accessibility with usefulness.

The cost of remembering everything

In Borges' story Funes the Memorious, the protagonist falls from a horse and acquires a perfect memory. He remembers everything. Every leaf on every tree he has ever seen, individually and in every configuration of light. Every word of every conversation. Every sensation of every moment. His recall is total, instantaneous, and flawless.

It absolutely destroys him.

Why? Funes cannot generalise. He can't abstract. He cannot, in any meaningful sense, think. As Borges writes: “To think is to forget a difference, to generalise, to abstract. In the overly replete world of Funes there were nothing but details.” His mind becomes like a hard drive full of uncompressed movies, storing every pixel of every frame but never producing a summary.

The story is usually read as a parable about the limits of memory. But it's perhaps more usefully read as a parable about the limits of information. Funes does not lack data; he is drowning in it. In many ways, drowning in data is a worse problem than lacking it, because it disguises itself as competence.

When you can recall every fact, it is easy to mistake recall for insight – like a student who crams for an exam but then has no idea what to do with that information in real life.

The limits of dashboards

As I explained in the second post in this series, there's a difference between choice (selecting from options presented to you) and agency (shaping which options exist in the first place).

Likewise, there is a useful distinction to be drawn between legibility and significance:

  • Legibility means that something can be captured as data: website traffic, programme completion rates, or social media engagement levels. The kinds of things that can be counted, tracked, graphed, and benchmarked.
  • Significance means how much that data actually matters to an organisation's mission. Sometimes legibility and significance overlap, but often they don't, and the two pull in different directions.

The most significant things an organisation does, such as building communities, doing advocacy work to change how people think about an issue, or creating the conditions for systemic change, are frequently the least legible. They're difficult to quantify not because they're “vague” but because they're complex. It's difficult to put a single number on complexity.

For example, a youth charity could find it very easy to tell their board how many workshop sessions they delivered within a financial year and how many young people attended. But it's much harder to figure out the significance of that: Do those young people now feel more able to challenge injustice? Are they more likely to seek support when they need it?

We know that AI systems are biased towards the legible. They're very good at identifying patterns in data that exists, but blind to things that haven't been measured – or couldn't be measured. So the danger is that sophisticated AI analysis starts pulling organisations even further towards optimising for what is measurable rather than what is meaningful.

Look, nobody sits down with the aim of replacing significance with legibility. The shift looks and feels more like erosion. The tools make legibility easy and satisfying, so significance ends up getting crowded out.

Over time, programmes with tidy metrics start to look more “effective” than programmes with messy but profound outcomes. If they're not careful, mission-driven organisations might find that a new CEO comes in and starts moving resources away from a programme producing transformative but hard-to-measure results, and towards one that produces modest but “dashboardable” outputs.

AI tools can make it all too easy to prioritise legibility over significance.

Identical, but not the same

Borges wrote another story that illuminates this from a different angle. In Pierre Menard, Author of the Quixote, a 20th-century writer produces a text that is word-for-word identical to Cervantes’ Don Quixote. He achieves this not by copying it, but by arriving at the same words through an entirely different process. Borges argues that the two identical texts mean completely different things because of the different contexts in which they were produced.

I would agree.

Context matters when it comes to AI-generated analysis. A recommendation emerging from deep organisational knowledge, contextual judgement, and hard-won experience may indeed look the same as a recommendation generated by an AI model processing the same data. The words in the recommendation could even be identical – e.g. “Focus on community engagement and donor stewardship to enhance relationships and increase contributions.”

But they're not the same recommendation. The process matters – not because process is in and of itself “sacred” – but because the capacity to generate the recommendation is a form of organisational intelligence. This capacity contains tacit knowledge, relationships, institutional memory, and a feel for risk that does not fit neatly into a prompt. Outsourcing that capacity means that the organisation becomes dependent on a tool that might be able to produce the output but can't reproduce the understanding that once lay behind it.

The case for strategic forgetting

Going back to the story of Funes the Memorious, his tragedy is that he can't forget. Everything is equally present, equally weighted, equally demanding of attention. He has no hierarchy of relevance, no framework for deciding what matters and what does not. He has, in a sense, infinite data and zero judgement.

The antidote to this isn't ignorance but what could be called strategic forgetting: a deliberate, principled practice of ignoring, or setting aside, information. Organisations might decide to do this not because the information is false, but because attending to it does not serve the organisation’s purpose at this moment.

To be clear, this is not an argument against evidence-based decision-making. Rather, it's an argument about what evidence-based decision-making actually requires. Organisations need not just evidence itself, but a framework for relevance, a prior understanding of what they are trying to achieve.

Without this kind of framework, more data just becomes more noise. AI tools, which excel at producing more data, can easily become “noise machines” if the organisation lacks the judgement to direct them.

It's my belief that the organisations that will use AI most effectively are not the ones that generate the most analysis. Instead, the most effective ones will be those that can take a comprehensive AI-generated report and say with confidence that they'll act on two of the findings and set the rest aside.

That judgement is the thing that can't be automated. It's the product of mission clarity, institutional memory, contextual understanding – and the courage to be selective in an environment that rewards comprehensiveness.

What we've learned from this series

Beginning with a question about perception, the first post in this series asked: do you know you are inside a system? I explored how AI tools move from “optional” to “inescapable” and how governance mechanisms can accumulate without producing accountability.

The second post asked: can you see what the system is hiding from you? We examined how AI tools narrow the space of choices before the decision-maker arrives – and what it takes to recover awareness of the paths that were closed.

This post asks: can you still think clearly inside the system? I've argued that more data does not produce better decisions unless it is paired with the judgement to determine what matters. AI tools can erode that judgement even as they multiply the information available.

These three questions form a sequence:

  1. You can't recover your choices if you do not know you are inside a system.
  2. You can't think clearly about your choices if you're unable to see them.
  3. Seeing them is not enough if you are overwhelmed by everything else the system shows you.

Borges spent his career writing about systems that are beautiful, total, and quietly inhuman: libraries, lotteries, infinite gardens, perfect memories. In each case, the system works exactly as designed, yet the humans inside it are diminished. The organisations that will cope best with AI are not the ones that adopt the most tools or generate the most data. They're the ones that retain the capacity to step back from the system, and ask whether it's serving their purpose.

At the end of the day, organisational capacity is not a technology problem; it's a leadership problem. It begins with the willingness to ask uncomfortable questions about how much of what your organisation does is truly its own.