
Redefining Futures: From AI Myths to the Performativity of Dystopia

Unpacking AI's Evolution, Debating Dystopian Narratives, and the Search for a New Term in Futures Thinking

Welcome back.

Here are a couple of thoughts from this week.

1.

What do I call myself – Part II

Almost immediately after sending the last issue, I realised something I had never noticed before: while I reject the term ‘futurism’ due to its fascist connotations, I had never registered that the art movement itself also used ‘futurist.’ I then discovered the Wikipedia page for ‘futurist’, which details a myriad of contexts and applications for the term spanning more than 150 years.

The Oxford English Dictionary identifies the earliest use of the term futurism in English as 1842, to refer, in a theological context, to the Christian eschatological tendency of that time.

I continually ponder what the next term could be that describes the evolving direction of future work. ‘Foresight practitioners’ and ‘futures thinkers’ (or ‘futures thinking facilitators’) focus too much on the method, and I'm less interested in moderating workshops. ‘Futures researcher’ has too strong an academic connotation for the reality of my work.

The term should be more focused on the outcome of the work. I don't consider my job done when the scenarios are delivered, even if that is what most foresight work has focused on over the last 70 years, which helps explain its abysmal track record of impact. As professionals in this field, we should pay more attention to what precedes and follows a typical foresight process. We need to determine when and where investigating futures is beneficial and how to integrate these insights into a continuous process. Am I more of a strategic designer with a foresight toolset?

To be continued …

2.

When we say “AI” …

I received a report about artificial intelligence from an investor. The first slide has the headline “We're at Day 1 of AI.” Below it, there's a timeline starting from 1990. The first line begins in 1990 and is labelled “Internet.” The second line starts in 2005 and is labelled “Smartphone.” The final line is labelled “AI” and begins in—I kid you not—2020.

In recent months, I've frequently discussed generative AI with media companies, and two common misunderstandings have kept arising. Firstly, there is a tendency to equate generative AI with AI in general. Secondly, there is the perception that AI is a surprising technological revolution that emerged seemingly from nowhere. These conclusions are understandable if one only follows the current hype, which includes claims like the ‘fastest consumer adoption of a product ever,’ among others. Therefore, I usually begin my talks with a brief history of AI spanning decades, followed by examples of AI technologies that media companies have used for over a decade, such as ‘robot journalism,’ which predates ChatGPT.

‘Previously on this hype …’ should be a slide in any investor deck, so to speak. But what seems to be happening here is that the term ‘artificial intelligence’ is transforming. In the last decade, it primarily evolved to encompass machine learning and deep learning (recall the AlphaGo milestone and the accompanying ‘now it's over’ sentiment?). These terms increasingly replaced the umbrella term ‘AI’ in communications, pitches, and similar contexts. Machine learning and deep learning were no longer abstract technological concepts but applied frameworks.

‘Technology is everything that doesn't work yet,’ as W. Daniel Hillis observed, and this applies to the term ‘artificial intelligence’ as well. To be more precise, I argue that when the term ‘artificial intelligence’ is used in public discourse, it often refers not to the technology itself but rather to images of its future.

In these conversations with media companies, the focus was always on how generative AI might alter jobs, products, processes, and business models. Rarely did these discussions delve into the specifics of how large language models currently work or how understanding transformer models can lead to a better grasp of how ChatGPT ‘understands’ text. Using the term ‘AI’ creates a speculative space broad enough to envision wide-ranging futures, in contrast to the more precise terms mentioned above, which would foster a more concrete discourse about current capabilities.

It will be intriguing to observe how and when ‘AI’ will be supplanted by more specific terms in public discourse and what new meanings ‘AI’ will then acquire.

3.

The Performativity of Dystopian Futures

Dystopian visions are often leveraged as cautionary tales, alerting us to potentially perilous paths. Their fundamental purpose is to stir action, to steer us away from unwelcome futures. Yet, a critical juncture exists in public discourse where these dystopian narratives shift. Overused, they morph from warnings into something more insidious: an anticipated reality. This occurs when the sentiment, “We must prevent this at all costs, yet prepare for its unlikely realisation,” is repeated excessively. By doing so, we inadvertently embed these outcomes into our collective consciousness. The result is a performativity of future visions that edge towards self-fulfilment.

This concept is particularly relevant when examining the discourse surrounding Donald Trump's potential re-election. The narrative is not just about current poll numbers but has evolved into using the prospect of his return as a lever to instigate action in diverse contexts, from Ukraine to climate conferences. While prudence in preparing for various futures is essential, it's equally critical to recognise the moment such preparations subtly acknowledge the undesired scenario as a likely eventuality. When we cross this threshold, the battle is lost even before it begins. We must remain vigilant, ensuring that our efforts to avert a future do not inadvertently solidify its place in our expectations.

Quotes of the Week

“The future, after all, is written in the present, and there’s a lot of powerful evidence at the forefront of media showing that communities are helping us find the path forward.”

AX Mina in We navigate deep uncertainty with community (with a quote from yours truly). Also, check out her Republica talk.

“Far too often, we blame women for turning to alternative medicine, painting them as credulous and even dangerous. But the blame does not lie with the women – it lies with the gender data gap. Thanks to hundreds of years of treating the male body as the default in medicine, we simply do not know enough about how disease manifests in the female body.”

Caroline Criado Perez, as quoted in ‘Everything you’ve been told is a lie!’ Inside the wellness-to-fascism pipeline

Worthy of your Time

I sat down with my friend and collaborator Jonas to reflect on our recent initiative to establish a community for critical futures in Germany. The podcast episode (in German) is here.

I also had a conversation with Philipp from the Gesundheitsmarkt-Podcast, talking about ageing, lifespan vs. healthspan, and the role of user experience in healthcare. That podcast episode (also in German) is here.


Thank you for your attention.

What has sparked your interest? What new question came to mind? What do you want to know more about? Hit reply or use the comment section to let me know.

Have a good one,
Johannes Kleske