Forced Futures
Why the backlash against AI is the reliable response to how we talk about it
At 4:12 a.m. on Friday, April 10, someone threw a Molotov cocktail at the gate of Sam Altman’s San Francisco home. The fire was small. No one was hurt. The attacker, a 20-year-old named Daniel Moreno-Gama, had traveled from Texas for the occasion. An hour later, police found him outside OpenAI’s headquarters with a jug of kerosene, threatening to burn the building down.1
In a document recovered during his arrest, Moreno-Gama wrote that he had chosen Altman as a target and listed other AI executives as further ones. “If I am going to advocate for others to kill and commit crimes,” he wrote, “then I must lead by example.” Months earlier, in an online chat in January, he had floated the idea of “Luigi’ing some tech CEOs,” a reference to Luigi Mangione, who had shot the UnitedHealthcare CEO in late 2024. At the time, he said the words were not meant literally.2
When The Verge covered the attack, its editorial team had to moderate the comments. That detail was the one that stuck with me. The Verge is not a publication with a violence-celebrating audience. Its readers are normally anti-violence, normally anti-war. And its editors had to work to hold that line in their own comment thread.
Something has shifted. Tech CEOs, people who a few years ago sold out conferences with their forward-looking optimism and were held up as role models for a generation, have landed somewhere near healthcare CEOs in the public moral imagination.
A diagnosis without a name
On their podcast this week, The Verge hosts David Pierce and Nilay Patel put their finger on something they did not have a term for. “There is this sense that all of these things are being forced upon me,” one of them said. “But that is precisely what Sam Altman has been saying about AI for years.” They described a double bind: AI is going to replace me, but without AI I don’t have a chance. How, they asked, would you not feel helpless?3
The Vergecast is a tech podcast, not a psychology paper. It did not reach for a technical term. It described an experience.
The polls say the same thing in numbers. Gallup surveyed 1,572 Americans aged 14 to 29 between late February and early March 2026. Compared with the same survey a year earlier, Gen Z’s excitement about AI had dropped 14 points, to 22 percent. Hopefulness had fallen 9 points, to 18 percent. Anger had risen 9 points, to 31 percent. Anxiety held steady at 42 percent. Among daily users of AI, the decline in positive feelings was as steep as in the overall group. The story is not “enthusiastic users versus fearful holdouts.” Even the users are turning.4
Stanford’s 2026 AI Index published the other half of the gap. Seventy-three percent of AI experts expect AI to have a positive impact on how people do their jobs. Twenty-three percent of the public expects the same. A fifty-point gap on a direct question about your own working life. Not AI in the abstract. Your job.5
These are not the numbers of a public that has not yet heard the pitch. They are the numbers of a public that has heard it clearly, for years, and is responding.
The substance behind that response — training data, labor, climate, product quality, the collapse of online trust and much more — is real, and not what this essay is about. I am looking at a layer on top: how the way AI is being pushed reliably shapes the response to it, whatever substance the push happens to meet.
The response has a name
In 1966, the social psychologist Jack Brehm gave it one: psychological reactance, the motivational counter-drive that kicks in when a person perceives a threat to their freedom of action.6 Reactance is an adaptive defense of autonomy, not stubbornness or irrationality. Tell someone they must do something, and they will want to do the opposite, not because of the content of the request, but because the form of it threatens their sense of self-determination.
The trigger is not what is being asked. The trigger is the loss of choice. When the trigger fires, it fires reliably.
Three ways to pull the trigger
Three features of how we talk about the future pull this trigger predictably.
The first is deterministic framing. “The AI revolution has begun.” “AGI within two years.” These sentences say the future arrives on a schedule. There is nothing to decide and no alternative to weigh. In January 2025, Sam Altman wrote, “We are now confident we know how to build AGI as we have traditionally understood it.”7 Dario Amodei, writing in “Machines of Loving Grace,” said powerful AI “could come as early as 2026.”8 These are presented as news reports from an inevitable future, not as one perspective among many.
The second is the imperative and the binary. “Adapt or die.” “Catch up or fall behind.” Not just a direction, but a demand that sorts you into winners and losers before the thing has even arrived.
The third is a subtler mechanism. Psychologists call it anticipatory reactance. The described future does not need to arrive for the response to fire. The announcement of an impending loss of freedom is enough. A future that never comes can still produce the push-back, because the resistance is triggered by the narrative, not the event. This is why pure prediction is never innocent. Tell someone their profession is about to disappear, and you have done something to them now.
The historian Mar Hicks, writing in Fast Company in August 2025, put this more sharply than any psychologist I have read:
“Saying a technology is inevitable and that it is going to determine how things historically develop puts so much power in the hands of the people who make the technology.”9
Inevitability is not a description of reality. It is a rhetorical move. Whoever says “it is coming either way” takes the conversation from everyone else. And everyone else responds with exactly the push-back the speaker can then dismiss as irrationality, nostalgia, or fear of the future.
The mirror
The same grammar does not run only in Silicon Valley.
It ran in the memo Shopify CEO Tobi Lütke sent his staff in April 2025, published on X because he expected it to leak anyway. “Reflexive AI usage is now a baseline expectation at Shopify,” he wrote. Managers, the memo continued, must now demonstrate why a job cannot be done by AI before they can hire a person for it.10 This is the voice of the market as natural law. The only open question is how quickly you comply.
It runs in the transformation deck at the next all-hands meeting. The hockey-stick projections. The disruption numbers carefully chosen to suggest that nothing else is possible. The “this is the direction we are going.”
It runs in the classroom. A teacher opens the careers unit with a slide titled “the jobs of 2035.” Half of these jobs will not exist, she says. The other half will require tools that have not been invented yet. The fifteen-year-olds nod politely. Some take notes. Some stare out the window. None of them asks where the number comes from, or which jobs she means, or how she knows. The announcement was not made for discussion. It was made to set the terms of their attention: to take something from them, even if that was not anyone’s conscious intention.
If you communicate about the future at all, and almost anyone with authority of any kind does, somewhere, to someone, then you are in this dynamic. On both sides of it. You have tensed up, inwardly or outwardly, when someone announced a future to you. You have probably also handed these same sentences to someone else. Without noticing. Not because you are a bad person. Because deterministic future-talk is coded, in our media and management culture, as clarity and as leadership.
Recognizing this in your own speech is the first step. Not out of guilt. Out of interest in what this rhetoric is costing you right now. And what it is costing is only visible if you count the second half of the response.
The quieter half
The Vergecast sees the Molotov. It sees the comments section. It sees the backlash in headlines.
The larger half of the response is quiet. It is the colleague who nods politely in the AI training session and never opens the tool afterwards. It is the co-worker who answers, “Yeah, for sure,” when leadership says the company is going all in on AI and then works exactly as she did before. It is the teenager who hears “this is the career landscape of your future” and mentally wanders off to something he actually cares about. It is the neighbor who says, “Yeah, interesting,” when the conversation turns to the future and then changes the subject.
This is the larger group. And the larger challenge for anyone trying to shape what happens next. It does not produce numbers. It does not write op-eds. It does not throw Molotov cocktails. It simply leaves the room, not physically but in every way that matters, while you think you have carried everyone with you.
Molotov and shrug are two responses to the same narrative. The first is loud and rare. The second is quiet and, in your organization, your classroom, and your family, almost always the majority.
A few weeks ago, I wrote about collapse narratives as self-fulfilling prophecies and about the exit strategies they produce: geographic, technological, psychological, and ideological.11 The quiet refusal in the face of AI is the same move at a smaller scale. A private exit from a conversation, not from a country. Each one individually reasonable. Taken together, they give the dominant narrative exactly what it wants.
The common root: agency
Reactance, the motivational state that fires when a future is being pushed onto us, is the thing underneath both the Molotov and the shrug. How it surfaces depends on the person and the context. What it protects is always the same. The word for it is “agency”: the felt sense that you can act within your own situation, that you can shape it and not just be subjected to it.
Reactance is the response to having your agency restricted. Its inverse is just as important. Positive future narratives create agency. They invite people to co-write the future they describe, rather than receive it as an announcement. They turn listeners into participants.
What does agency-creating future talk sound like? Not “AI is going to rewrite your role, so you had better learn the tools,” but “There are three or four plausible futures for this role, some of them better than where we are now and some of them worse. Which do we want to work toward? Which do we want to prevent? What would each of us need to make that possible?” Same topic, same urgency, a different psychological contract altogether. The first version turns the listener into someone whose freedom needs to be managed. The second turns them into someone whose judgment is needed.
The uncomfortable part is that most of us do not notice when our future talk is restricting agency. Deterministic sentences sound like clarity. “That is how it will be, no matter what we do” sounds like grown-up realism. Often it is the opposite: an abdication of your own shaping responsibility to an external inevitability, which is then passed along to everyone else you speak with. In them, the protective reflex fires just as reliably. And we wonder why our communication is not landing.
The reflex fires in both directions. When a position we have already adopted — including one of refusal — gets qualified or complicated, the same defense of autonomy kicks in from that side too. What becomes identity-anchored becomes hard to negotiate, whether the anchor sits in enthusiasm or opposition. A valid critique can crystallize into a narrative that treats its own qualifications as attacks. The theory applies there as well.
What we are seeing with the AI narrative right now is what happens when this mechanism runs for years. Agency gets restricted. The narrative settles into common sense. At some point, something tips. The backlash the Vergecast is trying to describe has been building for years. It is accumulated reactance surfacing in both registers at once, the loud register that makes news and the quiet register that has been present much longer than anyone noticed. Molotov and shrug: the same reflex expressed in two ways. Both are attempts to reclaim agency that has been taken for too long.
The way out is not a communication hack
The way out is not a matter of finding better words. It is a different mode.
Future as invitation, not decree. Real participation, not sham participation where the decision is already made and everyone is gathered to legitimize it.
Practically, this means checking your own sentences for one thing: do they treat the other person as a subject of the future or as an object of it? Is the narrative forcing a future on them or opening a space for them to shape? This is less a language technique than an attitude toward the people you are speaking with. Do you trust them to co-write the future, or are they receivers of the inevitable?
The principle is not new. Coch and French showed it in 1948, in a textile factory in Virginia. Workers who were included in shaping a change to their production process held their output steady. Workers who received the identical change as an instruction dropped to two thirds of their previous output and stayed there for the thirty days of the study.12 Nearly eighty years later, much of how we talk about the future still ignores it. Deterministic speech sounds more authoritative, and authority is easy to mistake for clarity.
Narratives that create agency do not have to overcome the protective reflex. They never activate it in the first place.
What story are you telling?
Which future story are you telling? As a leader, as a parent, as a colleague passing along what came from above, as a citizen with opinions about what happens next. Would you want to live in it yourself? Or does it leave you, at the end, with no agency of your own?
The response you get, loud protest or quiet absence, is not a verdict on the stubbornness of the people listening. It is a signal about the narrative itself.
Read the signal.
Further Reading
My Garden Note on the Center for Humane Technology. Tristan Harris and CHT are currently on a podcast tour for their Sundance documentary The AI Doc: Or How I Became an Apocaloptimist, which approaches the same question from the opposite end of the narrative spectrum. The reactance mechanism described here fires just as reliably when the forced future is catastrophic as when it is glorious.
Thanks to Harry Keller for valuable feedback on an earlier version.
Notes
1. “OpenAI CEO Sam Altman's San Francisco home attacked with Molotov cocktail,” Los Angeles Times, April 10, 2026. See also the FBI affidavit reporting that Moreno-Gama had authored a document claiming responsibility and listing additional AI-industry targets.
2. “Altman attack suspect proposed ‘Luigi’ing some tech CEOs,’” The Verge, April 16, 2026, citing Wall Street Journal reporting on Moreno-Gama’s online messages from January 2026.
3. “The ‘AI is inevitable’ trap,” The Vergecast, April 17, 2026.
4. “Gen Z’s AI Adoption Steady, but Skepticism Climbs,” Gallup, April 9, 2026. Survey conducted February 24 to March 4, 2026, n=1,572, ages 14 to 29, commissioned by the Walton Family Foundation and GSV Ventures.
5. Stanford Institute for Human-Centered Artificial Intelligence, “2026 AI Index Report: Public Opinion.”
6. Jack W. Brehm, A Theory of Psychological Reactance (Academic Press, 1966).
7. Sam Altman, “Reflections,” blog post, January 5, 2025.
8. Dario Amodei, “Machines of Loving Grace,” October 2024.
9. Mar Hicks, “Nothing about AI is inevitable: historian Mar Hicks on rejecting the future we're being sold,” Fast Company, August 21, 2025.
10. Tobi Lütke, “AI usage is now a baseline expectation,” internal Shopify memo published on X, April 7, 2025.
11. “Doomscrolling the Future: How collapse narratives become self-fulfilling prophecies,” The Futures Lens, March 29, 2026.
12. Lester Coch and John R. P. French Jr., “Overcoming Resistance to Change,” Human Relations, vol. 1, no. 4 (November 1948): 512–532.