If you’re anywhere near the AI, tech, or business worlds, you’ve probably heard about “The 2028 Global Intelligence Crisis”, a “scenario” about how AI could cause a major economic crisis in the next two years. People have said many things about it, but I wanted to make a note about it on my blog for a couple of reasons.

As a story, it’s a decent read for sure. It does its best to be convincing and provocative, and uses every tool it has to be taken seriously, like a decent hard sci-fi story. And here lies the first reason I wanted to write about it: it’s hard for me to call this proper scenario work. Even though the author claims otherwise, it’s much more of a prediction than a scenario.

If your job is foresight and you’ve decided to use scenarios as your tool, this isn’t the way to do it. Johannes Kleske explains why:

“The scenario’s weakness is the question it poses. Citrini asks, “Which existing jobs can AI replace?” And then it extrapolates linearly. More AI capability, more jobs replaced, more spending lost, downward spiral. Sangeet Paul Choudary, in his book Reshuffle, calls this the “intelligence distraction”: measuring AI’s impact by mapping it onto existing human tasks.

There’s a formulation I keep coming back to: we imagine the future as today, only more extreme. Citrini takes the current economic system and turns up the AI dial: faster automation, cheaper inference, and more displacement. It never asks how the system itself changes. And that is where the scenario stops short.”

To me, this is a reckless way to do scenarios, because you’re not actually looking at all the potential futures. You’re just taking one potential story and running with it. If you do that, you’ll end up with big gaps and lots of problems in your scenario, which in turn distorts how you see both today and the future.

This is why if you’re doing scenario work — or any type of foresight work — you need to put more effort and care into it. Look at other potential futures, come up with multiple scenarios, and explain why you chose those and not others. Show us what made you think these outcomes are more likely and what else could happen. Without those, what you have is not a scenario but a long-form prediction.


This is where the other reason I wanted to write this post comes in.

When you release a scenario like this out into the world, you can never be sure how it’s going to be used by others. And we saw how it became a really useful tool for some people, with stories like this one:

(Image: a Wall Street street sign with an American flag blurred in the background, from a Financial Times article about the US market.)

We all know the never-ending discussions about what AI will do, whether it’s a bubble or not, and more. The problem is that when the economy keeps going up amid a lot of uncertainty, you’ll also have a lot of people looking for an excuse to take their profits and leave the party. The FT Unhedged podcast’s episode on the report explains why Citrini’s scenario was that excuse for many people.

Of course, if Citrini hadn’t released the report when they did, last weekend would have been the excuse for those people. Thanks to our polycrisis times, new reasons to freak out get delivered to your door weekly. This one just made them look more serious because it’s a scenario.

Only, if you look a little closer, it’s not really a good one even on the basic economics. The Unhedged episode above explains it, but if you’re not convinced, there’s also this FT Alphaville blog post discussing some of the main issues with the report from an economics perspective:

“The scenario has no monetary or fiscal policy response, while in reality, if AI does drive unemployment to anything like the 10 per cent level in the scenario — in particular if politically influential white-collar workers are losing their jobs in droves — we would expect a vigorous policy response on both tracks.”

But most people (well, most people online) didn’t care, because it was a really convincing and well-timed story. It got a lot of attention, helped some people leave the table with their winnings, and generated a good amount of publicity for the author.

I’ve talked about why stories and myths are dangerous tools, but when it comes to the topic at hand, there is more to it:

“Despite protestation to the contrary, I assume that the Citrini scenario achieved its basic aim—though whether its authors intended the exact extent and direction of its effects is rather harder to judge. We are living through a contestation of narratives around “AI”, which has become a proxy front for an increasingly heated contestation over “tech” more broadly, which in turn is deeply entangled with a transitional moment in politics and economics (and much else besides).

It’s particularly messy because the media environment in which it is unfolding moves so very fast, and is so easily and cheaply flooded with fictions of a much lower fidelity than that of the Citrini scenario; the “fog of war” is on the other side of the glass screen of your phone, and the more you swipe, the thicker it gets.”

And that’s why you should be careful about the stories you tell.


Bonus: Someone from China also took a swing at predicting 2028 based on Citrini’s report. It’s worth a read, even if just to see how different other countries are from the US and why those predictions don’t play out the same way there.

“A much larger cohort of “pseudo white-collar” workers—those in government organs and state-owned enterprises—were not easily shaken by algorithms. In leader-driven systems, AI penetration was not particularly welcome. A lot of information, instructions, and paperwork still moved on paper, and tight confidentiality disciplines blocked AI from meaningfully reading even what had been digitised.”
