AI and the Futures of Work
A decade of hype cycles, failed predictions, and the stories we keep telling ourselves about AI and work
Last week, Matt Shumer[1] published an article called “Something Big Is Happening” on X, and it went viral. CNN, TBPN, almost a hundred million views. It perfectly captures a narrative that’s been dominant for years. In it, he describes his personal experience with current AI models, especially in coding, and extrapolates from there to basically tell everyone: everything is changing, it’s going to be crazy, save every cent you have.
I posted a short, admittedly snarky reaction on LinkedIn: that this article gives you zero insight into the future with AI but nevertheless displays a current mindset among some people in tech and shows you how our attention economy works. The reactions were intense. People who shared Shumer’s experience didn’t understand my point. And people who didn’t know what to make of all this AI talk felt validated. Both reactions taught me a lot, and the conversations that followed helped me formulate my perspective much more precisely.
So here’s my attempt to lay it out. Not as one long argument, but as a collection of thoughts around the topic that I’ve been carrying with me.
I’m deep in this, too
Before I get into the critical stuff, let me be clear: I understand where Shumer comes from. I actually share a lot of his personal experience with AI tools. I’ve been using Claude Code in combination with Obsidian for more than half a year now. I’ve developed my own knowledge database, my own CRM, and my own task management tools, all systems that run my daily work. In the last issue of this newsletter, I wrote about becoming “voice-pilled” because I spend so much time now speaking and having what I say transcribed. I’ve been deep in the weeds with OpenClaw and the communities around these tools. I can’t code, but I can build things now that were not possible for me before.
So my criticism is not coming from the sidelines. It’s coming from inside the practice.
What stories about the future actually tell you
As a critical futurist, I distinguish between present futures and future presents. Present futures are the stories we tell ourselves and others about the future. We do this usually with an agenda, because we want to invoke a certain future to gain an advantage. Future presents are the actual moments in the future. They simply do not exist yet.
I’m only interested in present futures, because you can learn a lot about the present from listening to stories about the future. Reading science fiction predicts very little about what will actually happen, but it reveals a great deal about what we project into the future based on our current problems.
Shumer’s article serves as a prime example of a present future. He has an experience in the present (AI tools getting much better at coding), and he immediately extrapolates it into a story about the future of all work. That story tells you how he perceives today. It tells you very little about what tomorrow will look like.
And his big fallacy is to think that his personal experience in a very specific field is going to roll out to the rest of the world. I’ve been researching futures of work, and especially futures of work and AI, for almost 15 years now. I’ve seen this same pattern again and again. It never works out like that because work is much, much more complex.
The hype cycle repeats
In 2013, I gave my first talk at Republica in Berlin about AI and the futures of work. Back then, the discourse was just starting. Before that, the idea was always that machines could take over physical work. In 2012, the conversation shifted: maybe AI and algorithms might actually become powerful enough to take over mental work. Since then, I've been on this beat.
In 2016, Google’s AlphaGo beat Lee Sedol, one of the best Go players in the world. I remember vividly: the articles were all over the media. “It’s over. The machines have won. It’s only a matter of time until they replace us all.”
It didn’t happen. And then ChatGPT launched in late 2022. Instantly the same articles: “Do you see what AI can do now? This changes everything!” Mainstream media and tech people expected exponential development toward AI taking over jobs. That was more than three years ago.
Here’s the thing that got me slightly cynical: if you observe this pattern for more than a decade, you start to ask yourself, “Really, is it coming? Because it should be here by now.” Every couple of months, the same article gets published in one of the German dailies or weeklies. “Here are some examples of what AI can do now. They’re coming for your job.” The specifics change. The structure doesn’t.
And the part that really annoys me with articles like Shumer’s is that they pretend all the other predictions that came before, for decades, don’t exist. We keep making this mistake. We predict something, it doesn’t happen, and then we predict again, adding, “But this time it’s really different.” It usually isn’t. Or if it is, it’s different in a way we didn’t quite figure out, because we tend to look at the future through the lens of how we perceive the past.[2]
The narrative even works in reverse: whenever layoffs are attributed to AI, it’s usually a cover story. The real reasons are overcapacity, restructuring, cost-cutting. But because the dominant narrative leads people to expect AI to replace jobs, companies can use “AI” as a convenient excuse. Society hears it and goes, “Yeah, of course, now it’s happening.” The narrative becomes a self-fulfilling justification for decisions that have little to do with AI.
The work paradox
In Germany right now, we desperately need more workers. The government is trying to convince us all that we need to work more. Many people have already experienced burnout due to excessive workload. But at the same time, a growing number of employees are afraid that they’re soon going to be replaced by AI.
How does that work? How are we supposed to go from “we should all work more” to “AI has replaced us all” in an instant?
And on a more personal level: I don’t know anyone who’s currently working with AI who’s doing less work. On the contrary. I’ve been deep in the communities around Claude Code and similar tools over the last few weeks. What I consistently observe is that people who are using these AI tools are working even more than before. There are articles from developers (and HBR) who talk about burnout, about how intense it is to work with these tools, about not having energy anymore. I see the same thing from many people in this field: “I’m not getting enough sleep. I’m eating badly. I really need to get away from the computer.”
There’s actually a concept for why efficiency tools make us work more, not less: Jevons’ paradox. In the 19th century, the economist William Stanley Jevons found that when technological improvements made coal use more efficient, total coal consumption went up, not down. Making something more efficient doesn’t reduce demand. It increases it because suddenly there are more use cases than before.
I keep coming back to Jevons’ paradox because I think it is the insight we constantly miss when looking at work. We see the current amount of work, we see tools that can make it more efficient and faster, and we think work will decrease. But Jevons’ paradox has taught us again and again: when things become more efficient, we just consume more of them. That’s the driver that’s so often missing from our extrapolations about the futures of work with AI.
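Jevons’ logic can be sketched as a toy calculation. The numbers and the constant-elasticity demand curve below are illustrative assumptions, not data: the point is simply that whenever demand for a task grows faster than the tool’s efficiency (elasticity above 1), the total effort spent goes up, not down.

```python
# Toy model of Jevons' paradox (illustrative assumptions, not real data).
# Demand is modeled with a constant elasticity in efficiency: doubling
# efficiency more than doubles demand whenever elasticity > 1, so total
# effort (demand / efficiency) rises instead of falling.

def total_effort(efficiency: float, elasticity: float, base_demand: float = 100.0) -> float:
    demand = base_demand * efficiency ** elasticity  # more efficient -> more use cases
    return demand / efficiency                       # effort actually spent

before = total_effort(efficiency=1.0, elasticity=1.5)  # old tools
after = total_effort(efficiency=2.0, elasticity=1.5)   # tools twice as efficient

print(f"before={before:.1f}, after={after:.1f}")  # before=100.0, after=141.4
```

With elasticity below 1 the same formula would show effort shrinking; the whole argument about AI and work turns on which regime we are in.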
Future narratives and the reactance problem
My observation over the last decade is that future narratives like Shumer’s don’t draw people in. They trigger their reactance and push them deeper into the trenches.
Reactance is the pushback you feel when someone tells you that you must do something, like adopt AI or lose your job. The instinctive response: “I don’t want this. I don’t care. Go away with this.” If you tell me everything is a “game changer,” my first impulse is to turn away, because it sounds overwhelming. It takes away my agency.
I’ve observed this constantly. The people who pushed back against my LinkedIn post were the ones who share Shumer’s experience. They wanted to use his article to convince their friends and colleagues: “See, he’s writing that you have to look at this!” I’ve seen this backfire too many times.
L.M. Sacasas calls this the Borg Complex, after the Star Trek aliens who always say, “Resistance is futile.” The spirit of the Borg lives on in tech companies and pundits who prod everyone they deem slow on the technological uptake toward assimilation. Sacasas even has a symptom list: making grandiose but unsupported claims for technology; paying lip service to, but ultimately dismissing, genuine concerns; equating resistance or caution with reactionary nostalgia; announcing a bleak future for those who refuse to assimilate.
Sound familiar?
This is the core insight I want to share: these FOMO-driven future narratives have the opposite effect of the one intended. If you want to give people more agency in shaping the future of work, the overhyped “everything changes, don’t get left behind” framing does the opposite. Fear is a good activator but a bad motivator. People share the article, but does it actually get anyone to open Claude and experiment? Or does it just make them more anxious and more likely to either freeze or push back?
And I think this is what we’re starting to see: a strong counter-movement to AI forming. All of this hype is leading us away from the benefits that AI tools can actually bring. Because we too easily buy into the narrative that “resistance is futile,” that we should just accept that AI is the future and go along.
I don’t want that. I think AI is changing things, but I want society to shape this transition according to its values. The question I keep asking is, how can we use the best of this technology but with the values we have as a society and the way we want to live in this world? How can people gain more agency in shaping the future, instead of having it dictated to them?
The attention economy makes it worse
The dynamics of the attention economy make it so hard for people to actually deal with AI tools in a rational way. The hype surrounding these tools is overwhelming. If you want to look on YouTube for videos explaining how to try out Claude Code, most of them have titles like “This Changes Everything! AGAIN!” and “I’m Making a Million Dollars in Just a Week.”
What happens then is that people compare their experience to the hype and come up short. They think, “Yeah, that’s not the game changer everybody says it is.” As a result, they often abandon the tools once more.
There’s a deep irony in Shumer’s article. Buried inside is actually useful advice: the models have gotten much better, you should try them out, and if you last tested them a couple of months ago, you should play with them again because they’ve changed so much. If he had simply written, “I’m really amazed at what current models can do. Make sure you try them out, too, so you have a current understanding of what’s possible,” I probably would have shared it. But that wouldn’t have gotten him a hundred million views and onto CNN. That’s the attention economy at work.
The attention economy didn’t amplify the useful part. It amplified the FOMO.
It changes the system, not just the tasks
Now, I don’t want you to walk away from this thinking it’s all hype and you don’t have to do anything. That is absolutely the wrong takeaway.
I’m convinced that AI is going to change work fundamentally in many places. But it’s going to take much longer, it’s going to be so much weirder, and it’s going to be so much more unexpected than today’s predictions suggest.
The great fallacy with AI and work is that we only ever look at the existing system and speculate how AI might accelerate and automate it in the short term. We only imagine existing jobs disappearing. We have a hard time imagining new jobs emerging that wouldn’t make any sense in the current system but would become rather obvious once the system adapts.
I can see it in my own work. I can’t code. But I now do pull requests on GitHub; I’ve built my own CRM and my own task management system. These are not tasks that got automated. These are capabilities that didn’t exist for me before. And that changes the whole system of what my work looks like.
What we’re living through is not a crisis of who does the work, but a transition in what the work is.
And this is exactly what Shumer’s article misses. He says, “I had Claude build an app for me while I stepped away from the computer.” He sees the new capability. But he doesn’t ask what it means for the system. Whether it’s a good app. Whether it sells. Whether anyone wants it. He only talks about the tinkering and the experiment part. But experiments are not what you base your business on. The intriguing question isn’t what AI can do. It’s what new kinds of work and value emerge once things shift.
So what should you do?
Here’s the funny part. My actual advice is surprisingly close to what Shumer is saying at his core: get your hands dirty with these tools. Explore. Play. Tinker.
But without the fear. Without the FOMO. Without the “save every cent because the robots are coming.”
I keep (half-)joking that this era is more like 1999 with the internet. Some people tell me they feel like they’re behind on AI, like they might have already missed the train. I get that feeling. But imagine someone in 1999 going, “This internet thing, I’ve missed it. Not worth looking into anymore.” We’d smile at that today, because the things that ended up mattering the most (social media, smartphones, streaming, remote work) hadn’t even been invented yet.
If we’re truly at a 1999-equivalent moment with AI, then the things that will actually matter haven’t been built yet. Feeling like you’ve already missed it is the FOMO talking. And FOMO is a product of the hype, not a reflection of reality.
So: take away the hype. Take away the future narratives. Explore AI as a normal technology. Be playful. Try things out. Build prototypes. But don’t compare it to “this has to change everything.” Just find out for yourself how it can be helpful, and keep learning about it. That will actually help you, and society, to be much more reflective and prepared for whatever comes next.
And if you want to bring others along, try a different approach. Instead of saying, “This is changing everything, and you’re already behind,” try, “I’m really fascinated by what’s happening here. Can I help you try it out too?”
That’s the best way to shape the future instead of just having it happen to you.
Special thanks to Katja Nettesheim and Stephan Thiel for their questions.
Reading list
This is what I love about articles like Shumer’s: they bring out the best writing from others. Here are some pieces I can highly recommend:
Why I’m not worried about AI job loss by David Oks
The Singularity Is Going Viral by John Herrman
A.I.’s Pandemic Moment by Max Read
Tool Shaped Objects by Will Manidis
[1] Matt Shumer is a young AI entrepreneur and CEO of HyperWrite. In 2024, he claimed his AI model Reflection 70B had achieved top benchmark results, which independent researchers could not replicate. He later apologized for getting “ahead of himself.”
[2] This is why Shumer’s comparison to the surprise of COVID at the beginning of 2020 doesn’t make sense at all. I’d argue it’s the exact opposite. COVID was genuinely unexpected; only a few experts had been warning about pandemic risks. For decades, thousands of articles, books, TV shows, and talks have discussed AI replacing jobs. His article got so much attention not because it’s a wake-up call nobody heard before, but because it confirms what everyone already expects.