Open Thinkering

When AI tools give you choices but take your agency

Aerial shot of paths forking in a green space
Photo by Tara Scahill / Unsplash

This is the second in a series of posts using the fiction of Jorge Luis Borges to examine how AI is reshaping decision-making in mission-driven organisations. My first post in the series, 'The (AI) lottery is already running', explored how AI systems are becoming inescapable and how governance layers can add complexity without producing accountability. This post focuses on a different question: what happens to agency when the tools we rely on are shaping our choices before we know we are choosing?


In “The Garden of Forking Paths,” Jorge Luis Borges introduces a character who discovers that one of his ancestors created a novel that is also a labyrinth. In this labyrinth, time doesn't move in a single line; instead, at every moment of decision, all possible outcomes branch outward and exist simultaneously. A character who dies on one path is still alive on another; one side wins a war on one path but loses it on another. Every possibility remains real and in play.

It can be a difficult concept for a reader to grasp, and I'm sure Borges meant it to be. Its branching structure has influenced everything from interpretations of quantum theory to hypertext fiction. For the purpose of this post, however, what matters most is not the branching itself but what it reveals about the nature of choice. I'm particularly interested in the question of agency: whether you know which paths have been closed off before you arrive at the crossroads where you must make a decision.

AI as a 'pruning machine'

If we use the metaphor of the garden, one way of looking at AI systems that filter, rank, and recommend is as pruning machines. They take a large space of possibilities and narrow it to a manageable set. So, for example, search engines take the entire indexed web and give you the ten most relevant results. Hiring platforms take hundreds (or thousands) of applicants and come up with a shortlist. Social media feeds take the full output of everyone you follow and select what you see first.

I'm not disputing that this can be incredibly useful. The problem is not the pruning but rather that the pruning is invisible.

Let's take the example of the hiring platform. When you invite a candidate to interview from those shortlisted by an AI-assisted platform, you aren't choosing from all available candidates. You're choosing from a subset that has already been shaped by the platform's model of who you are and what you are likely to want. The model has trained on your past behaviour, and that of people like you, which means it is optimised to reinforce existing patterns.

This means that candidates who might be amazing for your organisation, but who are unusual or fit a different profile from the one you were expecting, are quietly removed from view. Importantly, you never see them being explicitly rejected; they're simply never presented to you.

This is how systems capture decision-making. The system learns what you have done and then serves you more of the same. Your organisation becomes like your social media feed: optimised to serve people like you. The longer you use it, the narrower the “corridor” becomes. You still feel like you are choosing, but, to return to the original metaphor, you are doing so within a smaller and smaller garden – and the walls are invisible.
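
To make this feedback loop concrete, here's a deliberately over-simplified sketch in Python. Everything in it is invented for illustration – the single 'profile' score per candidate, the similarity ranking, the update rule – and real platforms are far more complex. But the dynamic is the one described above: the model shows you what resembles your past choices, you choose from what you are shown, and the rest of the pool never comes into view.

    # Toy simulation of the 'pruning machine' feedback loop (illustrative only).
    # Each candidate is reduced to a single made-up 'profile' score, and the
    # platform ranks candidates by closeness to its model of your preferences.

    candidates = [i / 100 for i in range(100)]  # 100 profiles: 0.00, 0.01, ..., 0.99
    preference = 0.50                           # the platform's model of what you want
    SHORTLIST_SIZE = 10
    ever_shown = set()

    for hiring_round in range(1, 6):
        # Pruning: of 100 candidates, you are only ever shown the 10 'most relevant'.
        shortlist = sorted(candidates, key=lambda c: abs(c - preference))[:SHORTLIST_SIZE]
        ever_shown.update(shortlist)

        # You choose from what you were shown (here, simply the closest match)...
        chosen = shortlist[0]

        # ...and the model reinforces that pattern for next time.
        preference = 0.8 * preference + 0.2 * chosen

        print(f"Round {hiring_round}: shown {len(ever_shown)} of {len(candidates)} candidates so far")

Run it and the corridor never widens: the same ten profiles appear round after round, while the other ninety are never explicitly rejected – they are simply never presented.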

It's not the fault of the tool itself, as it's doing exactly what it was designed to do. But the organisation loses something it may not notice until it is gone.

Choice vs. agency

In the same way that the term 'AI' bundles together a number of different sub-fields (machine learning, predictive analytics, etc.), the language we use around AI adoption tends to combine and elide distinct things. For example, we talk a lot about choice, but we don't talk much about agency:

  • Choice is selecting from options presented to you
  • Agency is shaping which options exist in the first place

Most AI tools give organisations more choice: they speed up drafting text, broaden search results, and surface patterns in data. This is useful, but these tools often reduce agency by deciding in advance the frame within which those choices appear. It's a bit like sitting down to eat a meal at a restaurant: the menu might look fantastic, but you didn't get to decide what was on it.

This distinction between choice and agency builds on something I mentioned in the first post in this series. In Borges' story 'The Lottery of Babylon', citizens are given the experience of participation and the feeling that they are part of an inclusive system. At the same time, though, the very substance of that participation is eroded. The Babylonians end up with more and more interactions with the Lottery, but less and less influence over what it does.

In other words: they had plenty of choice and very little agency.

It's not an exact parallel with AI-assisted organisational life, but it's close enough to be uncomfortable. Organisations using AI tools extensively may find themselves making more decisions faster across more domains, which is going to feel like progress. The tools can help accelerate the process of deciding, but they don't build capacity, nor do they help organisations make decisions differently.

Three diagnostic questions about agency

I'm definitely not arguing against using AI tools; I use them all the time. But using them for decision-making and governance processes feels different to using them for other purposes. Organisations should be using AI in this context with a particular kind of awareness – one that most have probably not yet developed, because the need for it is so new.

I ended the first post in this series with a diagnostic question: Where do your decisions end and the algorithm's begin? Can you tell? I don't think organisations need to get into the weeds of technical questions about how algorithms work (though those matter too), but they should start with operational questions about how the tools shape their decisions.

For example:

  • Can you describe the last significant decision your organisation made without AI assistance? If every recent decision traces back to an algorithmic suggestion, a dashboard insight, or a set of AI-generated options, then your organisation's decision-making process has, in effect, been colonised.
  • Can you identify what the tool filtered out? When your hiring platform returned 12 results, what were the other 300? If your organisation can't answer questions like these, then your AI tools are locking you into a particular path (see the sketch after this list).
  • When was the last time your organisation made a significant decision that contradicted an algorithmic suggestion? This is probably the biggest test. The CEO of an organisation with real agency might say to their board: “The data points one way and we are going another, for reasons we can explain.” An organisation that has outsourced its agency, on the other hand, will find that it never disagrees with the tools – not because the tools are always right, but because the tools have gradually redefined what “right” even means.
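
One practical way into the second question is to keep a record of what was pruned, not just what was kept. Here's a minimal sketch in Python, assuming a hypothetical filter_tool function standing in for whatever AI-assisted shortlisting your platform does – the function name, the audit file, and the record format are all invented for illustration:

    import json
    from datetime import datetime, timezone

    def shortlist_with_audit(candidates, filter_tool, audit_path="pruning_audit.jsonl"):
        """Run an AI-assisted filter, but record what it pruned as well as what it kept.

        `filter_tool` is a stand-in for whatever tool you use: it takes a list
        of candidate IDs and returns the subset it recommends.
        """
        shortlist = filter_tool(candidates)
        pruned = [c for c in candidates if c not in shortlist]

        # One JSON line per run, so "what were the other 300?" has an answer later.
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "pool_size": len(candidates),
            "shortlisted": shortlist,
            "pruned": pruned,
        }
        with open(audit_path, "a") as f:
            f.write(json.dumps(record) + "\n")

        return shortlist

Nothing about this is sophisticated – it's a diary of the tool's pruning decisions – but it turns an invisible filter into something a colleague (or a board) can review later.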

These aren't easy questions to answer. They're not designed to shame organisations, but to help make visible a kind of influence that works precisely because it is invisible.

Widening the corridor

In the garden that Borges describes, all paths exist simultaneously. That's what makes it a labyrinth rather than a corridor. The tragedy of the story, of course (and of our lives), is that any individual within it can only walk one path at a time. We may never know what lay down the others.

The practical version of this tragedy is an organisation that has been using AI tools long enough that it can't imagine operating differently. Not because the old ways were better, but because the tools have so thoroughly shaped the organisation's processes, its assumptions, and its sense of what is possible, that alternatives are literally unthinkable. The paths are still there and still branch, but the organisation no longer sees the turnings.

So the work of recovering agency for organisations is the work of making those turnings visible again. This doesn't require abandoning AI tools, but it does require building an organisational habit of asking, before every significant decision, what options the tools may have removed from consideration. Might any of the removed options deserve a closer look?

This is a lot harder than it sounds, as humans can be cognitively lazy. I know I can be. So this is an ongoing process rather than a one-off audit. It's the kind of discipline that's difficult to develop from inside the system – for the same reason that Borges' characters cannot see the shape of the labyrinth they are walking through.

As ever, let me know if you need any help with that external view. And are there any other questions you think would be useful as diagnostics?


The next post in this series will turn to another Borges story, 'Funes the Memorious', and the question of what happens when organisations drown in data. If the Lottery showed us that we are inside a system, and the Garden showed us how that system hides our choices, this story will show us how data can paralyse our capacity to think clearly – even as it promises to make us 'smarter'.