The Missing Scenario
What a second future does that a single one never can
This week, a scenario went viral. Citrini Research published “The 2028 Global Intelligence Crisis,” a speculative scenario written as a retrospective from the year 2028. In it, AI doesn’t just automate tasks: it triggers what the authors call an “Intelligence Displacement Spiral.” Companies cut white-collar workers, spending collapses, tax revenue drops, the mortgage market cracks, and there is no escape sector because AI keeps improving everywhere at once. Unemployment hits 10.2%. The S&P drops 38%.
The scenario was intended as an investor stress test. The authors said so explicitly. James Van Geelen, one of the co-authors, told the Odd Lots podcast he'd put the probability at 10 to 15 percent. But the markets didn't read the footnotes. IBM fell roughly 13%, driven in part by the scenario's impact. A prediction market on Kalshi appeared overnight, trading the “Citrini scenario” at 11.6%. Citadel Securities published a rebuttal, and stocks recovered. Tracy Alloway put it precisely: “We shouldn't be in an environment where a single think piece causes a broad sell-off.”
I agree with Alloway. But as a critical futurist, I’m interested in a different question: why did this happen?
Here’s the short answer: the moment a concrete scenario lands in an environment of anxiety, it stops being a scenario and becomes a prediction.
This is something I’ve seen in my own work, in dozens of scenario workshops. The participants arrive with a fixed image of the future in their heads. It’s rarely a clearly articulated one, more of a background hum: a sense that AI will take over, that things will get worse, that the future is already decided. When they then encounter a detailed scenario that matches that hum, it clicks into place. Confirmation bias does the rest.
This keeps happening. AI doom scenarios go viral because they make the dominant narrative more concrete. They give shape to a feeling people already carry. And that is precisely the problem.
A detailed rebuttal is one valid response. But there is one thing a rebuttal can’t do: place a second scenario alongside the first.
I’ve taught scenario methods for over a decade. The most important moment in every workshop comes when the groups have each developed their scenario and all four go up on the wall for the first time. The energy in the room changes. People sit up straighter and lean forward. Their eyes widen. They suddenly understand what the person at the front of the room has been trying to explain all along: that all of these are plausible directions. That different decisions push things in different directions. That they have choices and alternatives, each with its own set of consequences. That the future is not determined.
I know people who still bring up that moment years later, because it changed how they think about the future. One scenario narrows the view. Multiple scenarios open it. And only when you see multiple futures side by side do scenarios become what they are supposed to be: tools for thinking, not for fearing.
Right now, there is one AI narrative dominating the conversation. It asks a specific question: what happens when AI automates existing work? The Intelligence Displacement Spiral is one answer to that question, and a plausible one. I’m not going to argue that Citrini is wrong.
I’m going to ask a different question: what happens when AI restructures systems?
Citrini’s blind spot
Citrini is not a bad scenario. It is an incomplete one.
The scenario’s weakness is the question it poses. Citrini asks, “Which existing jobs can AI replace?” And then it extrapolates linearly. More AI capability, more jobs replaced, more spending lost, downward spiral. Sangeet Paul Choudary, in his book Reshuffle, calls this the “intelligence distraction”: measuring AI’s impact by mapping it onto existing human tasks.
There’s a formulation I keep coming back to: we imagine the future as today, only more extreme. Citrini takes the current economic system and turns up the AI dial: faster automation, cheaper inference, and more displacement. It never asks how the system itself changes. And that is where the scenario stops short.
Consider transport in 1955. If you asked, “How many horse carts does the truck replace?” you’d get a technically reasonable answer. But you’d completely miss supermarkets, suburbs, and global supply chains. The truck didn’t replace horses. It restructured the logic of trade, settlement, and consumption.
Or consider agriculture. In 1800, roughly three quarters of the American workforce worked the land. Today it’s under 2%. The other 73% are not unemployed. They work as air traffic controllers, software engineers, radiologists, film editors, and industrial designers: professions that would have been incomprehensible to a farmer in 1800, because those jobs only make sense within a system that didn’t yet exist. The farmer could imagine more efficient plows. They could not imagine a profession dedicated to ensuring aircraft safely sequence their landings.
Citrini’s “Intelligence Displacement Spiral” assumes displaced workers have nowhere to go because AI improves across all domains. That’s true if you think in terms of tasks. If you ask, “Which existing jobs can AI do?” you’ll always end up with fewer jobs, because you’re only counting what disappears. You’re never counting what emerges, because the new things only become visible after the system shifts.
The same sectors, a different lens
Let me take three sectors from Citrini’s scenario and look at them through a different lens. I’ll keep the same AI capabilities and the same companies under pressure. But I’ll ask a different question: what new system emerges?
Software and SaaS. Citrini says agentic coding tools make SaaS replicable. Customers build their own tools. Prices collapse. Zendesk defaults on billions in debt from its leveraged buyout.
The first part is plausible. Simple workflow tools will become replicable, and established products will break apart as old constraints disappear. Anyone who’s watched a platform ecosystem unravel knows what this looks like: unbundling. But unbundling is always followed by rebundling, a reconfiguration around new constraints. When every company can build its own basic tools, a new bottleneck appears: who makes all those tools work together? Who guarantees outcomes when the software is assembled from AI-generated components?
In this scenario, SaaS wouldn’t disappear. It would shift from selling software licenses to guaranteeing outcomes and coordinating the mess underneath. The Zendesk of 2028 might not exist. But the problem Zendesk solves (coordinating customer interactions at scale) wouldn’t vanish just because the tool layer becomes cheap. The coordination layer would become more valuable.
Labor market. Citrini projects 10.2% unemployment with no escape sector. That number makes sense if you only count tasks that disappear. But when AI reduces coordination costs, things become possible that were too expensive before.
In 1937, the economist Ronald Coase asked why companies exist at all. His answer: because coordinating through the market is expensive. When external coordination gets cheaper, organizations get smaller, and the threshold for starting a viable business drops. There is data to support this: solo-founded startups went from 17% of all new startups in 2017 to over 36% by mid-2025, according to Carta’s Solo Founders Report. Solo entrepreneurs and small teams are already doing things that required entire departments three years ago, because the administrative cost of running an operation is shrinking.
The economy doesn’t just shift who does the work. It shifts how work is organized. New kinds of work appear around new constraints: ensuring the quality of AI-generated outputs and orchestrating systems where information is abundant but trust is scarce. This is the pattern from every major economic transition: the new jobs are not improved versions of the old ones. They are different kinds of work that only make sense inside the new system.
Consumer economy and payments. Citrini’s scenario has AI agents bypassing intermediaries. Visa and Mastercard lose interchange revenue as stablecoins and agent-to-agent transactions route around the card networks.
Old intermediation dissolves. That part is likely right. But intermediation wouldn’t disappear: it would migrate. When AI agents shop for me, new questions arise that didn’t matter before. If neither the buyer nor the seller is human, the question of who guarantees quality and who takes the hit when things go wrong doesn’t disappear. It gets harder. And when there are suddenly ten thousand options where there used to be ten, someone has to do the filtering. The new intermediaries might look nothing like Visa: agent reputation services that track reliability, outcome insurance for automated transactions, and curation layers that filter infinite choices down to the ones that match what you want.
The old toll booths would close, and new ones would open in different places.
What Citrini misses entirely
The sectors above show how existing industries look different through a system lens. But the more intriguing part is what becomes visible only when you stop asking about existing jobs and start asking about new systems.
Constraint migration. When AI makes knowledge abundant, where does value go? It migrates to the constraints that AI doesn’t solve.
This shift is already visible. Consider the sommelier. Wine knowledge used to be scarce and valuable. Today, anyone with a phone can look up tasting notes, regional characteristics, and food pairings. The sommelier should be obsolete. They’re not. What they provide was never really information in the first place. It’s confidence in a moment of uncertainty. The value migrated from knowledge delivery to judgment and trust.
This pattern will repeat across professions. When expertise becomes cheap (and AI is making expertise cheap very fast), value moves to judgment and accountability. Who certifies that an AI-generated legal document is correct? Who takes responsibility when an AI-coordinated supply chain breaks? These are tasks that only exist because AI exists.
The building-block economy. When capabilities become modular, rentable, recombinable, and scalable, founding costs drop and experimentation speeds up. Companies can be assembled from pre-existing components: AI agents, specialized knowledge modules, network access, and cloud infrastructure. The cost of testing a business idea approaches zero, and the cost of scaling a working one drops sharply.
Citrini describes a Displacement Spiral: each wave of automation triggers the next. But when founding costs drop, the opposite dynamic becomes possible: a Creation Spiral.
Lower coordination costs mean more business experiments. More experiments produce more new businesses. Those businesses create demand for services that didn’t exist before: someone to verify that the AI output is correct, someone to vouch that the provider is real. That demand creates jobs and spending, which fund the next round of experiments. The spiral would produce new types of work that we can’t yet name, because they only make sense inside a system that is still forming.
This is not hypothetical. The solo-founder wave, from 17% to over 36% of all startups in eight years, is an early signal of what the first turn of this spiral looks like.
Loosening the future
Citrini’s scenario is plausible. The alternative I’ve sketched here is also plausible. Neither is a prediction. Neither will happen the way it’s described. Reality will be messier, slower in some places, faster in others, and full of things neither scenario anticipates.
That is the point.
When you read a single scenario, the question you ask yourself is, “Will this happen?” When you read two scenarios side by side, you stop asking, “Will this happen?” You start asking, “What pushes things one way or the other?” and “What can I influence?”
Sohail Inayatullah calls this “loosening the future”: weakening the grip that a single future image has on our thinking so that alternatives become visible again. I wrote about this in the previous issue.
An alternative scenario forces specific questions. Is the negative outcome really as determined as it feels? What would need to happen for the positive version to play out?
The Creation Spiral doesn’t happen automatically. AI tools need to be broadly accessible: if only large corporations can afford the best models, the building-block economy stays locked. Education systems need to prepare people for a landscape where the jobs themselves keep changing. Regulation has to balance protecting workers during the transition with leaving room for the experiments that create new work. And governments need the political will to shape this actively.
None of that is guaranteed. Every one of these is a fork. And which way things go depends on choices that haven’t been made yet.
And even these two scenarios together are incomplete. What would a third look like? A fourth? What aspects of AI’s impact are not covered by either scenario? The moment you ask these questions, you’ve already left the single-scenario trap. You’re thinking in possibilities, not predictions. The exploration space gets wider, not narrower.
Scenarios are not forecasts. They are tools for seeing what you can do. A single scenario narrows that view. A second one cracks it open.
Want me to talk with your organization about this? Write me an email at johannes@kleske.de or just hit reply.
Supplementary reading
Two pieces on the Citrini scenario that come at it from different angles:
Adrian Monck: Why the Citrini AI crisis won’t happen — argues the scenario gets the capability curve wrong. AI follows an S-curve, and most AI pilots fail at implementation. A classic rebuttal: the premises are off. My piece above accepts the premises and asks a different question.
Paul Graham Raven: Intentions and inversions — frames Citrini as evidence of narrative’s material power: a fiction that moved markets. If you’re interested in why stories have this kind of force, start here.
New in the Futures Garden
Notes I added or updated in my digital garden this week:
Documentation as Infrastructure (updated), prompted in part by Anthropic’s announcement of AI-powered COBOL modernization the same week.