Cultural Lead: Why AI Is Different
What happens when culture runs ahead of technology
I give talks about AI for a living. As a keynote speaker and foresight researcher, I’ve spent over a decade explaining how artificial intelligence is changing work and society. And the most telling moments are never during the talk itself. They come during the Q&A.
Someone raises their hand and asks, “If I learn how to use this, am I training my replacement inside the machine?”
The room goes quiet. Not because the question is new. Because everyone has been thinking it.
I hear versions of this at almost every event now. The questions rarely touch the technology itself. They’re about survival, identity, and control. “What’s the minimum we need to do without getting deeper into this?” “Can we keep this as small as possible?” People don’t ask these questions about a new project management tool. They ask them about something that feels like an existential threat.
And sometimes the energy turns. The frustration and the anxiety about all this AI noise get directed at me. I’m the one on stage talking about it, so I become the stand-in for the threat itself.
But these reactions have almost nothing to do with the specific technology I’ve just been discussing. They come from somewhere much deeper and much older. To understand where, we need to go back about a hundred years.
The pattern we all assume
In 1922, the sociologist William F. Ogburn published a book called Social Change with Respect to Culture and Original Nature. In it, he introduced a concept that still shapes how we think about technology and society: Cultural Lag.
The idea is straightforward. Technology (what Ogburn called “material culture”) moves fast. Society’s response (norms, laws, values, institutions) follows behind, sometimes decades later. The gap between the two creates friction, confusion, and harm. Ogburn saw technology as the primary motor of social change: the material world advances, and everything else adapts. Later scholars have called this a form of soft technological determinism. Technology doesn’t dictate outcomes mechanically, but it sets the pace, and society has to keep up.
We’ve seen this play out repeatedly. Social media platforms launched between 2004 and 2008. The Cambridge Analytica scandal broke in 2018. The EU Digital Services Act, the first serious regulatory framework, took effect in stages starting in 2023. And right now, the biggest policy debate around social media is age restrictions: whether children should be on these platforms at all. That’s a conversation about a technology that is over twenty years old. Two decades of lag, and we’re still catching up.
This is the pattern we’ve internalized. Technology leads. Culture follows. It feels like a law of nature.
But with AI, something strange happened.
The twist: Cultural Lead
The pattern reversed.
With AI, culture was first. Decades before the technology could deliver anything close to artificial intelligence, our stories, fears, and hopes about it were already deeply embedded in society. We’d been imagining AI for over a century before the technology began to catch up.
Ogburn called the normal pattern Cultural Lag. I’ve started calling what’s happening with AI the opposite: Cultural Lead. I think it names something real that the existing vocabulary misses.
The main criticism of Ogburn’s model has always been its unidirectional causality: technology leads, culture follows, end of story. AI doesn’t just challenge that assumption. It inverts it. With AI, culture didn’t merely precede the technology. It actively shaped what was built. What we’re looking at might be closer to cultural determinism: culture dictating the direction of technological development.
The evidence
We’ve been imagining artificial intelligence for a very long time. The line stretches back at least to Mary Shelley’s Frankenstein in 1818, the first major exploration of artificial life in fiction. From there, the cultural images kept accumulating: Metropolis (1927), HAL 9000 in 2001: A Space Odyssey (1968), and The Terminator (1984). Each generation added new layers to what “AI” meant in the collective imagination. And those layers didn’t contradict each other. They coexisted, giving us simultaneously the killer robot, the helpful assistant, the sentient companion, and the superintelligence that ends civilization. All these images are still active, all of them shaping how people respond to the same three letters.
These images didn’t stay in the movies. They migrated directly into the technology.
Jeff Bezos has explicitly stated that the talking computer on Star Trek served as inspiration for Amazon’s Alexa. Both Apple and Google approached Majel Barrett, the actress who voiced the Star Trek computer, to voice their AI assistants before her death in 2008. They weren’t borrowing an aesthetic. They were trying to rebuild a piece of science fiction, down to the casting.
The most striking example came on May 13, 2024. That day, OpenAI demonstrated GPT-4o with its new voice capabilities. Sam Altman posted a single word on X: “her.”
The reference was immediate. Spike Jonze’s 2013 film Her, in which Scarlett Johansson voices an AI companion named Samantha. Altman had called the film “prophetic” in a September 2023 interview, naming it his favorite AI movie. Eight months before the demo, he was already telling the world which movie he wanted to make real.
OpenAI had approached Johansson to voice their AI. She refused. They launched anyway with a voice called “Sky” that sounded strikingly similar to hers. Johansson issued a public statement accusing them of deliberately replicating her voice. Her lawyers sent formal letters to OpenAI demanding an explanation. OpenAI pulled the voice.
A tech CEO tried to rebuild a movie literally, including the actress’s voice, against her will. This is not a case of science fiction that vaguely inspires technology. This is culture steering the technology, right down to the casting.
The collision
For most of the history of AI, the technology couldn’t deliver what culture had been imagining. That’s the core of Cultural Lead: the gap between expectation and reality ran in the opposite direction from what Ogburn described. Culture was ahead. Technology was behind.
When a group of researchers coined the term “Artificial Intelligence” at Dartmouth College in 1956, they were already playing catch-up to the cultural images. Their founding conjecture: “Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” Herbert Simon predicted in 1965 that machines would be capable, within twenty years, of doing “any work a man can do.” Marvin Minsky told Life magazine in 1970 that machines with “the general intelligence of an average human being” were three to eight years away. The pattern was set early: promises calibrated not to the technology, but to the cultural expectations that already existed.
What followed was a cycle of raised expectations and collapse. The field went through two AI winters, periods when funding dried up because the technology couldn’t deliver on its promises. The Lighthill Report told the UK government in 1973 that “in no part of the field have the discoveries made so far produced the major impact that was then promised.” By the late 1980s, “AI” had become so toxic that researchers rebranded their work as “machine learning” or “informatics” just to keep their funding.
In between the winters, there were breakthroughs that made headlines: IBM’s Deep Blue beating Kasparov in 1997, IBM’s Watson winning Jeopardy! in 2011, Google DeepMind’s AlphaGo beating Go champion Lee Sedol in 2016. Each was presented as proof that AI had arrived. Deep Blue and AlphaGo could each play only the single game they were built for. Watson’s apparent language comprehension turned out to be an illusion: when IBM tried to apply it to healthcare, the project quietly collapsed. But for most people, the nuance didn’t register. What registered was: the machines are winning. And yet you still couldn’t sit down and talk to one.
Then, in November 2022, OpenAI released ChatGPT. One million users in five days. A hundred million in two months. For the first time, anyone with an internet connection could interact with something that felt close to the AI they’d been imagining. It was the first significant overlap between cultural expectation and technological reality. Not a closing of the gap: the expectations still far exceed what the technology can deliver, which is why every hallucination and every failed use case feels like a betrayal. But enough overlap that the emotional charge accumulating since Frankenstein found something real to attach itself to.
Why the emotions run so deep
In futures research, we call this kind of deep, collectively held vision a “Future Imaginary”: a narrative so self-evident that we mistake it for reality. The AI Imaginary is the shared conviction that artificial intelligence is a transformative, possibly existential force. It manifests in different narratives and archetypes: the Terminator, the Star Trek computer, Samantha from Her, the job-stealing robot. These aren’t separate beliefs. They’re expressions of the same underlying imaginary, and they shape how people react to AI today, largely without knowing it.
Here’s what this looks like in practice. When I’m in a room with twenty people discussing AI, they think they’re talking about the same thing. They’re not. They share the same imaginary, but each person has latched onto a different narrative within it. For some, AI is simply ChatGPT: a text tool that is slightly better than autocomplete. For others, it’s a vague, overwhelming force they can’t quite grasp. For some, it’s Skynet. For others, it’s the companion from Her or the Star Trek computer.
When a team sits down to brainstorm AI use cases or develop an AI strategy, each person reacts to their own dominant narrative. The person carrying the Terminator archetype worries about losing control. The person carrying Her worries about emotional dependency and manipulation. The person who sees “just a tool” wonders what all the fuss is about. They talk past each other, and nobody realizes why. You don’t think, “I am now reacting to a science fiction movie I saw fifteen years ago.” You just feel the reaction.
This also explains something about one of the oldest stories we tell about technology: “Machines will take our jobs.” This narrative is much older than AI. It attached itself to the power loom in the 1810s, to automation fears in the 1960s, and to the internet in the 1990s. It’s a wandering narrative: a story structure that detaches from specific evidence and reattaches to whatever new technology appears, each generation encountering it as if it were new.¹
But Cultural Lead explains why this story hits differently with AI. With previous technologies, the fear was a reaction to something new appearing in the world. With AI, the fear was already there: cultivated by decades of Terminator movies and dystopian scenarios, pre-loaded and waiting for a technology to attach itself to. A 2019 study by the Allensbach Institute found that 76% of Germans cited the Terminator when asked which fictional AI they recognized. The emotional charge was in place long before anyone opened ChatGPT.
This is what I see organizations miss when they approach AI as a technology challenge. The resistance they encounter in their teams, the anxiety, the anger, the “how do we keep this as small as possible”: these are not reactions to a software tool. They are reactions to future images people have been carrying, unexamined, for decades.
Loosening the future
So what do we do with this?
In Critical Futures Studies, one of the core practices is what Sohail Inayatullah calls “loosening the future.” The goal: weakening the grip that a single future image has on our thinking so that alternatives become visible again.
I want to be honest about how hard this is. Future Imaginaries are, almost by definition, resistant to change. That’s what makes them imaginaries. You can’t tell someone to “think differently about AI” and expect the Terminator in their head to quietly step aside.
Inayatullah has a phrase for why this is so difficult:
“Often the present is a stranded asset, a psychic sunk cost. A great deal of emotional investment has been put into the present, and it no longer works, and thus we are unable and unwilling to make changes for a different future.”
We’ve invested so much of our identity, our fears, and our professional self-image into specific future narratives that letting go feels like losing something real.
The most powerful loosening tool is surprisingly simple: the plural. Futures instead of future. The moment people hear “futures,” something shifts. Oh. There isn’t just one. The one dominating my mind right now is just one of many possible versions of what comes next.
A strong future narrative narrows the options we can see. But only in our perception. The possibilities are still there. We just can’t perceive them when a single image fills the entire screen.
For individuals, this means something quite practical: recognizing your own AI image and noticing how it shapes your reactions. When you feel a strong emotional response to AI news, that’s a signal. The strength of the feeling often tells you more about the image running in the background than about the technology in front of you.
For organizations, the implication is uncomfortable. If you want to make good decisions about AI, the cultural work has to come before (or at least alongside) the technology work. That means creating spaces where people can surface and examine their AI images before you hand them a tool and a deadline. Most organizations skip this entirely. They wonder why their AI strategy meets so much resistance, and they blame the people for not being “open to change.”
Ogburn’s Cultural Lag assumes the hard work begins after the technology arrives: catch up, regulate, and adapt. Cultural Lead means the hard work was already underway long before anyone opened ChatGPT. The stories were already told. The fears were already in place. The images were already running. That’s why AI strategies that start with the technology are starting in the wrong place.
The real work begins with becoming aware of which future image is running in your head. With recognizing that the emotional intensity you feel about AI might not be about AI at all. It might be about a story you’ve been carrying for much longer than you realize.
The present, as Inayatullah writes, is “merely the fragile victory of one possible trajectory over other pathways.” That’s true for the present, and it’s true for the futures we project from here.
Loosening those images doesn’t give you a new answer. It gives you a different question: what other futures could there be?
Want me to talk to your organization about this?
New in the Futures Garden
Notes I added or updated in my digital garden this week:
Imagination Economy (new)
Effective Altruism (new)
Rationalists (rewritten)
TESCREAL (updated)
For German-speaking readers
Three recent appearances, all in German:
Apokalypse Not Now — on the WUNDERPANIK podcast: how futures research helps with AI panic
imagine all the agents — mit Johannes Kleske — a conversation with Klaas Bollhoefer about Future Imaginaries, the Imagination Economy, and what’s actually happening right now
KI als Zukunftsbild — my new column in Changement magazine on why our image of AI is outdated
¹ The real history of this narrative is worth knowing: the Luddites of the 1810s, often dismissed as anti-technology zealots, were actually skilled craftspeople fighting for control over how new machines were deployed and who captured the benefits. As scholar Kevin Binfield summarized, “They just wanted machines that made high-quality goods, and they wanted these machines to be run by workers who had gone through an apprenticeship and got paid decent wages.” The question was never, “Will the machine replace me?” but “Who decides how my work changes?” That question hasn’t changed in 200 years. (I explored the recurring patterns of AI-and-work narratives more extensively in the previous issue of this newsletter.)