Open Thinkering

The (AI) lottery is already running


I've been reading Labyrinths, a collection of short stories and essays by Jorge Luis Borges. One of these stories, 'The Lottery in Babylon' (1941), describes a society in which a lottery starts life as a simple voluntary game.

Over time, however, the lottery becomes so pervasive and so opaque that no one can tell the difference between what it determines and what simply happens. The Company that administers it may or may not still exist, and citizens might no longer be free.

It's now 80+ years later and, to be quite honest, it felt less like I was reading Borges' speculative fiction and more like a status report on life in the late 2020s.

Brief story arc

The lottery begins innocuously, with a few people buying tickets. Winners collect their prizes and it is considered nothing more than entertainment. The crucial change comes when the Company introduces penalties alongside rewards. Without giving away spoilers, this change raises the stakes and, perversely, draws in more participants.

Over time, participation becomes compulsory. Not by some form of decree, but by the simple logic that 'opting out' carries costs that no rational person would accept.

A metaphor for AI

This feels familiar. AI tools arrived as things you could choose to try: chatbots, image generators, and the like. Pretty quickly, though, they've become things employers expect to be used, positioned as 'things your competitors are already using', and of course embedded in platforms on which we all depend.

That shift from 'you can use this' to 'you can't afford not to' happened quickly and without anyone holding a vote.

For nonprofits and social enterprises, this brings a particular kind of pressure – from funders who expect AI-assisted grant applications, to peer organisations becoming “more efficient” by automating their comms, to platforms reshaping algorithms around AI-generated content. The cost of non-participation rises, meaning that this AI lottery becomes less optional by the day.

In another story, Borges described something like the logical endpoint to all this. 'The Library of Babel' tells the tale of a library containing every possible book: every true sentence and every false one, every coherent argument, and every piece of nonsense, all shelved side by side with no index and no distinction.

The problem for librarians is not that they lack information, but that they are buried in it. They have no means of separating what matters from what is just noise.

Our increasingly AI-saturated information environment is developing the same pathology. The volume of AI-generated content flowing through the channels that mission-driven organisations depend on is not producing a richer information commons. It's producing a Library of Babel: everything is available and nothing is findable.

The Company: everywhere and nowhere

Going back to 'The Lottery in Babylon', I find the most unsettling element to be the Company behind it. At first, it is a known entity, but over time, it becomes diffuse, invisible – and possibly even fictional.

The story's narrator reports that “the Company, with its habitual discretion, did not reply.” Some Babylonians believe the Company has been absorbed into reality itself, while others suspect it may have never existed at all.

When we think of who governs AI systems, it becomes a rather uncomfortable analogy. For example, when you apply for a job and a hiring algorithm screens out your application, who decided? When a content recommendation system shapes what millions of people believe about a political issue, who is responsible? When an “AI-enhanced” system incorrectly denies a benefits claim, who should take the blame?

The answer is that it's rarely a single organisation, a single person, or a single decision. It's layers upon layers of models trained on data selected by people implementing policies, written by committees interpreting regulations, drafted by legislators responding to lobbyists, who are funded by the very companies being regulated.

TL;DR: it's turtles all the way down, and not in a good way. The Company is everywhere and nowhere. Accountability quietly dissolves into architecture.

Complexity without accountability

In the story, the Company responds to complaints about unfairness by, guess what, adding more layers of lottery.

Secondary drawings start determining the outcomes of primary drawings. Then tertiary drawings adjudicate disputes about secondary ones. The system becomes so layered that it achieves a kind of perverse, Kafkaesque legitimacy. It's too complicated to be called “unfair” because no one can trace a causal chain long enough to identify where bias enters the system.

I think it's fair to say that AI governance is developing along similar lines. We now have ethics boards, “red-teaming” exercises, alignment research programmes, responsible AI frameworks, audit standards, and transparency reports. They are all well-intentioned, but each layer adds complexity.

After all of this work is done, the question remains: “who is accountable when this system harms someone?” It's as hard to answer as it was before the layers were added.

I grew up in a world where many problems (e.g. climate change) were the result of insincerity and inaction. But the risk here is not that these governance efforts are insincere or that no-one is doing anything. The risk is that their proliferation becomes a substitute for the thing they were meant to produce.

When an organisation can point to 57 oversight mechanisms and still not tell you who is responsible for a specific outcome, the mechanisms are functioning as camouflage, not as accountability.

Just to emphasise the point, I'll bring in a third story from Borges here which he called 'Tlön, Uqbar, Orbis Tertius'. In it, a secret society invents a fictional world, complete with its own philosophy, science, and language. Over time, objects from the fictional world begin appearing in reality. Eventually, reality itself starts conforming to the invented world's logic, and people forget that Tlön was ever fictional at all.

Something similar happens when organisations adopt AI systems built on particular assumptions about how decisions should be made. What counts as “evidence” and “optimisation” should be socially negotiated, not defined unilaterally by one or more companies.

We should not let a platform's logic (i.e. its invented world) begin to overwrite that of organisations or societies. We can already start to see hints of this where writing sounds the same because it's all shaped by the same AI writing tools. It used to be em-dashes, and now with Claude's 4.6 models it's the overuse of words like honestly, genuinely and meaningfully. It's a small way in which people and organisations adapt to the system rather than the other way around. After long enough, perhaps no one will remember that things were ever done differently...

An open question

Borges' story ends without resolution. The narrator, who is himself a product of the system he describes, cannot tell whether he is speaking freely or whether his account is itself an outcome of the lottery. It's like being in The Matrix: he cannot get outside the thing he is trying to describe.

We are already inside a system where AI mediates how we find information, how we communicate, how organisations make decisions, and how resources get allocated. So, for better or worse, the ship has sailed on the question of whether or not to adopt AI. The question is whether we can still perceive the boundary between our own choices and the system's outputs. And whether that boundary is meaningful if we cannot.

I work with organisations that are trying to make the world a better place. Their starting point should be a simple diagnostic question: where do your decisions end and the algorithm's begin? Can you tell?

If they can't, then it is a sign the Lottery is already running.