The Phantom AI Race
A metaphor that runs without competition
One of the founding hypotheses here at the Futures Lens is that the stories we tell ourselves about the future influence our decisions and behaviors in the present.
As Europeans, we often tell ourselves a specific story about the AI race. The Americans are ahead. The Chinese are coming on strong. And we are somewhere in the back of the pack, haggling over the fine print of AI regulation while the real thing gets built on the other side of the Atlantic and the Pacific. It does not matter how much money Brussels earmarks for sovereign compute or how many AI initiatives Berlin and Paris announce. We never quite close the gap. We certainly do not pull ahead. By now, the story is so familiar that we no longer recognize it as a story. It feels like a description of reality.
I want to offer a different perspective: The race we believe we are losing might not exist.
Let me start with something that seems obvious but is often overlooked. A race has three defining features. First, a clearly defined goal. Second, a moment when it is over. Third, a clear winner. Take any one of those away, and the word “race” stops meaning anything in particular.
These three features remind me of something. They are the signature of a particular kind of game. The philosopher James P. Carse, in his strange and wonderful little book Finite and Infinite Games, divides games into two types. Finite games are played to win. They have fixed rules, clear boundaries, and an ending. Infinite games are played to keep the play going. The rules can be changed, the boundaries move, and the whole point is to prevent the game from ever coming to an end. Chess is finite. A long friendship is infinite. A court case is finite. A democracy, on a good day, is infinite. The question that Carse’s distinction makes possible is one that almost never gets asked in the current debate. Is AI development a finite game? Because if it isn’t, then every casual use of the word “race” is a small act of misdescription.
Where the metaphor comes from
It is worth asking where the metaphor came from in the first place. The images in our heads are Cold War images. Sputnik, Kennedy, the moon landing, and the arms buildup. But the metaphor is older than that, and its origin tells us something useful about how it works.
The first documented uses of “arms race” as a description of a great-power technology competition cluster in the years 1906 to 1910. Britain had just introduced HMS Dreadnought, a battleship so much more capable than anything else at sea that it instantly rendered every existing battleship obsolete and forced the naval rivalry into a fresh start. Within two years, British newspapers and politicians were writing about a dreadnought arms race between Britain and Germany. The phrase was new. The structured geopolitical contest it described was new. The two came into being together.
What strikes me when I read the history is who coined the phrase, not who built the most ships. The admirals and the strategists did not use “race” as their operative term. In Berlin, Admiral Tirpitz worked with the language of risk theory. In London, the Admiralty thought in terms of force ratios and strategic calculation. The race metaphor was a product of the mass press. It was written in editorial rooms, shouted from front pages, and repeated by politicians who needed a simple line to mobilize a public that could not otherwise be made to care about naval budgets. “We want eight and we won’t wait” was the great British slogan of the 1908-09 naval scare, the campaign that forced the Asquith government to accelerate dreadnought construction.1
The function of the metaphor, from its very beginning, was to mobilize, not to describe. It created urgency. It made defense budgets politically defensible. It turned a technical and strategic situation into a story that the public could follow and feel something about. It also, arguably, poisoned Anglo-German relations more effectively than any diplomatic failure of the same period.
And then the metaphor traveled. Sputnik brought it back in 1957, and since then it has been the default vocabulary for any large technological rivalry. So worn in by now that we stop hearing it as a metaphor at all. But its function has not changed. The race metaphor is not a word for a thing that exists. It is a lever that makes a certain kind of political mobilization possible. It does today exactly what it did in 1910.
Where the metaphor breaks
So let’s actually check. Does AI development satisfy the three conditions of a race?
Take the goal first. What is AI racing toward? Ask any frontier lab, and the answer is AGI, artificial general intelligence. The Moon was a specific goal with a specific endpoint. AGI is a moving target with no agreed-upon definition, no benchmark that everyone accepts, and no criterion for when it has been reached. Some labs define it as the ability to perform most economically valuable work, which is a vague business definition rather than a technical one. Others treat it as a philosophical proposition about machine cognition. Others use it as a marketing term for the next major model release. The word is useful precisely because it is elastic. That is the opposite of what a finite game requires.
Take the ending next. When does the race end? After the Moon landing, the astronauts came home, the budgets dropped, and the engineers moved on to the Shuttle. A race has an after. AI development does not. It is iterative, open-ended, and structurally incapable of producing a moment where everyone agrees that we are done. And the target itself keeps moving. AGI five years ago meant one thing. Now that labs claim to be getting close to it, the real AGI has quietly been redefined to mean something else, further out. Every milestone becomes the baseline for the next milestone. There is no arrival, only more running. The actual development keeps going regardless.
Take the winner last. Who wins the race, and how would we know? AI capabilities are distributed. They live in multiple labs, in multiple countries, in open-source projects that cut across company and national lines. They emerge from the interaction of model architectures, training data, hardware, and deployment infrastructure, each of which is itself distributed. The podium moment is missing because there is no podium.
So why is the race metaphor so prevalent in the AI discourse to begin with? It has something to do with a quasi-religious belief held by its strongest proponents. They believe in the Intelligence Explosion: the idea that the first group to build an AGI will trigger recursive self-improvement, rapidly outpace everyone else, and lock in a permanent advantage.2 Inside that belief, the metaphor fits perfectly. There is a goal (AGI), an ending (the Singularity), and a winner (whoever gets there first). Outside that belief, it fits nothing.
That belief is the hidden wager inside the race framing, and everyone who uses the phrase is quietly making the bet without naming it. The lineage is traceable: Silicon Valley rationalism leads to Singularity theory, which leads to the Intelligence Explosion hypothesis, which leads to “therefore, it is a race.” Each step sounds more neutral than the one before. By the end, the conclusion feels obvious. That is how ideology collapses into common sense, and the metaphor is how it travels.
Once you accept the frame, four things follow almost automatically. George Lakoff and Mark Johnson made the case in Metaphors We Live By: metaphors do not just describe how we think; they structure how we think.3 Accept the race, and you inherit four habits of thought.
You inherit zero-sum logic. A race consists of both winners and losers. Cooperation becomes a strategic mistake. Any advantage shared is an advantage lost.
You inherit speed over safety. Races reward the fastest, not the most careful. Within the race frame, regulation becomes a handicap. Slowing down becomes losing.
You inherit bilateral framing. Races are contests between two participants, and the most common version here is the United States versus China. The rest of the world disappears. So do civil society actors, smaller nations, open research communities, and anyone who does not fit onto a two-lane track.
You inherit the spectator role. Races have runners and audiences. Citizens become observers of a geopolitical contest they never chose to watch. The only role available is watching and cheering. You can place a bet, but you cannot play.
These are structural consequences of accepting the metaphor, not interpretive choices. Take the frame, and these four come with it whether you wanted them or not.
Running against a phantom
The story gets stranger. It turns out that the race metaphor is a poor fit for AI development and, equally, a poor fit for the second runner. The opponent we think is chasing us, or that we think we are chasing, is not running in the same race. In some meaningful sense, it is not running at all.
When I read the Chinese AI debate, I encounter a different language. Xi Jinping, in the Politburo study sessions of 2025, talks about self-reliance (自立自强), about healthy and orderly development, and about application-oriented innovation. His officials, in turn, explicitly warn against disorderly competition and against a follow-the-crowd approach.4 These are the verbs of someone worried that his own country might be pulled into the American frame and waste its resources there. None of it is the language of a sprint. A Stanford DigiChina analysis sums up the Chinese position in a single line: “China views its competition with the United States as one that can be won via AI adoption instead of a race toward the elusive artificial general intelligence.”5 The operative Chinese keywords are AI+X industry integration, good-enough models, and flywheel. Phrases for embedding technology broadly across an economy, not for a sprint toward a finish line.
Which makes the absurdity complete. The United States is running in a race without a finish line, without an ending, without a clear winner, and without an opponent who believes they are running. The race rhetoric produces its own competitor. It has to, because without one, the metaphor loses its function. A race without a runner in the next lane would be hard to justify. So the next lane gets populated, and the runner in it gets ascribed motions that belong to a different sport entirely.
In 1910, we know who needed the phantom. The navy, the newspapers, the politicians who wanted bigger budgets. Who needs it today?
Tech companies need it. If we are in a race, speed is a virtue and regulation is a handicap. Safety concerns become obstacles. Every call for caution becomes, inside the frame, an act of self-sabotage. “We would do more safety work, but our competitors won’t” is a defensible line only if the competitors exist.
Deregulation advocates need it. The frame turns China into a permanent bogeyman, and any proposed rule can be answered with “the other side won’t play by it.” Tiffany C. Li has documented how this framing creates a technological determinism that makes regulation look futile when, in practice, it is available and workable.6
Geopolitical hardliners need it. The frame militarizes a civilian technology. Vladimir Putin’s 2017 line, “whoever leads in AI will rule the world,” gets quoted as a warning. I read it more accurately as the moment when the metaphor began to fulfill itself. Once heads of state frame AI as a geopolitical weapon, it becomes one.
And then there is the case that shows you just how strong the frame is, because it cracks the one place you would expect it to hold. In February 2026, Anthropic revised its Responsible Scaling Policy. Anthropic had built its brand on being the safety lab, the one that would hold the line. The earlier version of the policy included a commitment to pause development if safety measures could not keep pace. The revised version removed that commitment. Chief Science Officer Jared Kaplan explained the change: “We didn’t really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments ... if competitors are blazing ahead.” The policy itself now states that pausing while others advance “could result in a world that is less safe.”7
A safety commitment, dropped by reference to the race. The frame did exactly what the frame does. And Anthropic was the lab that was supposed to be different. If the safety-first lab can be bent by the metaphor, the metaphor is doing serious structural work.
What opens up
Take the frame away, and a different picture appears. No utopia, no counter-narrative, no heroic alternative. Just options that were there the whole time, hidden by a metaphor that kept our eyes pointed at a single imaginary finish line.
Regulation becomes design, not drag. Without the race, the handicap argument collapses, and regulation becomes visible again as what it is: a set of choices about the kind of technology we want to live with. The EU AI Act is a policy decision about risk, responsibility, and rights, not a concession to Beijing. And we can defend it in exactly those terms.
Cooperation becomes rational. Without the zero-sum frame, international collaboration on AI safety and governance is the obvious move, not an act of treason. Li calls this regulatory collaboration: countries working together on standards instead of racing one another to the bottom.
Pace becomes negotiable. Without the race, the argument for deploying unsafe systems because a competitor might stops making sense. Speed stops being a moral imperative and becomes a choice with consequences.
Citizens become participants. Without the spectator frame, democratic engagement with AI policy is the normal way that societies make decisions about technologies that affect everyone. Calling it naïve misreads it.
None of these options had to be invented. They just had to be made visible. The race metaphor is not a description of reality that we have to accept. It is a frame that we can refuse. And refusing it does not mean ignoring the real challenges of AI development. It means engaging with them on our own terms.
Inside the race frame, Europe cannot win. If the game is to build an AGI first, train the largest frontier model, or mobilize the most compute, then we are structurally and permanently behind. That is the only conclusion the frame allows. And the frame itself is the problem.
But frames are offers, not descriptions. And Carse has one more thing to say. He writes,
“Whoever plays, plays freely. Whoever must play, cannot play.”
The United States plays the race because it believes it has to, and it insists that China plays too. But China has already done the work of stepping out of the American frame. It looked at the race, declined to call it that, and wrote its own playbook: self-reliance, orderly development, good-enough models, and integration. Its self-description may be self-serving, but it is self-chosen. In that narrow sense, China serves as a model.
Europe is not there yet. We are still caught in the American story, often without noticing. We write our AI strategies by reference to theirs, measure our progress against their benchmarks, and worry about catching up because we have quietly accepted their question as ours. We have not yet done what China did: step back, look at the race, decline to call it that, and write our own playbook.
There are small movements in that direction. In the spring of 2026, Yann LeCun, one of the three Turing Award winners for deep learning, left Meta to start Advanced Machine Intelligence Labs in Paris. Its one-billion-dollar seed round, the largest in European history, was financed globally.8 What LeCun is building is not a better LLM. He has argued repeatedly that the LLM path is a dead end for AGI, and he is betting instead on world models and his Joint Embedding Predictive Architecture. He is not the hero of this story. He still believes in AGI, which keeps him inside a logic that assumes a goal. But he demonstrates something worth noticing: even inside the AGI fixation, there are more paths available than the race frame makes visible. And he chose to demonstrate those paths in Paris, not in San Francisco.
But LeCun is only a partial move. He is still inside the race, only on a different track. The more interesting question is what Europe could do if it stepped all the way out. What if there were never going to be an AGI? What if the finish line the race points at does not exist, and the sprint was always going to run forever? How would we approach AI then? What would we want to build, and for whom, and on what terms? Those are our questions to ask. We just have to stop waiting for permission from a race we never signed up for.
The defeatism I opened with was correct inside the story we told ourselves. The stories we tell ourselves about the future shape the present. Once we recognize this one as just one story among many, the field is wider than we imagined.
Further Reading
James P. Carse: Finite and Infinite Games (Free Press, 1986). The short, strange, persistently useful book whose distinction carries this whole essay.
Tiffany C. Li: Ending the AI Race (Villanova Law Review, 2025). The most thorough legal and policy critique of race framing I know of.
Paul Scharre: Debunking the AI Arms Race Theory (Texas National Security Review, 2021). Still the clearest statement of why the arms-race analogy breaks down structurally.
Stimson Center: America is Running the Wrong AI Race (2026). The US-China mismatch argument made recently and crisply.
My Garden Note on The AI Race for a longer exploration with more sources and branches.
Elsewhere This Week
An interview I did with Daniela Rattunde for KreativBund (in German). On why the creative industries deserve to be treated as infrastructure for making alternative futures tangible. A companion to the closing question of this essay.
A livestream conversation with Markus Iofcea and Marcel Aberle (in German) about the exit strategies people reach for once they accept the collapse narrative: geographical, technological, psychological, ideological. A continuation of the discussion I started in Doomscrolling the Future.
For the history of the Anglo-German naval race and the role of the press in making it legible as a race, see the overview at the International Encyclopedia of the First World War and the Imperial War Museum’s account. The phrase “We want eight and we won’t wait” is the most cited example of how the popular press translated naval strategy into political pressure.
Vernor Vinge, “The Coming Technological Singularity: How to Survive in the Post-Human Era,” VISION-21 Symposium, NASA Lewis Research Center, 1993. The Intelligence Explosion idea is older (I. J. Good, 1965), but Vinge is the version that entered Silicon Valley discourse.
George Lakoff and Mark Johnson: Metaphors We Live By (University of Chicago Press, 1980). It remains the foundational text on conceptual metaphor and is directly relevant whenever a debate is organized around a single image.
Xi Jinping’s AI remarks are summarized by the Center for Security and Emerging Technology at Georgetown in their report, Xi Politburo Collective Study on AI 2025. The warning against disorderly competition was widely noted in Chinese state media and picked up by Fortune, among others.
Stanford DigiChina, Xi's Message to the Politburo on AI. The quoted line is from their reading of the Politburo documents and is worth taking seriously as a description of Chinese self-understanding rather than a Western projection of it.
Tiffany C. Li, Ending the AI Race (see Further Reading). Her concept of regulatory collaboration as an alternative to race-to-the-bottom dynamics is the most useful single alternative to race framing I have come across.
Anthropic, Responsible Scaling Policy v3 (February 2026). Jared Kaplan was quoted on the rationale for removing the pause commitment in Time.
The founding, funding and focus of Advanced Machine Intelligence Labs are covered in Le Figaro and the Wikipedia entry. LeCun's long-standing criticism of the LLM paradigm is spread across his public talks and social media over the past several years.