Beyond the AI Hype
The limits of scaling, the illusion of superintelligence, and what the U.S. can learn from China’s “AI+” approach.
Cross-posting something I wrote this week for ASPI’s Daily Digest newsletter — on AI hype, the slowing pace of breakthroughs, and why Beijing might be playing the longer game. My apologies to those already subscribed to the Daily Cyber & Tech Digest for the cross-post.
When AI hype meets hard limits
This month brought something that had started to feel a little passé — an open letter warning about the dangers of advanced AI.
If you’re like me, the media coverage focusing on Prince Harry and Meghan Markle as signatories nearly made you scroll past entirely.
But it’s worth a closer look. A quick glance at the top of the official site shows this isn’t just celebrity virtue-signalling; it’s a warning from the people who built the field.
The first name listed is Geoffrey Hinton, a Nobel laureate and Turing Award winner who is often called the godfather of AI. The organisers also claim he is the second-most cited scientist in the world.
And what about the most cited scientist in the world? He’s right there too, just below Hinton. In fact, among the familiar celebrity names are roughly 800 Nobel laureates and leading researchers.
More revealing, though, are the non-signatories whose warnings about superintelligence are quoted on the petition website.
OpenAI’s Sam Altman, we are reminded, has called superhuman AI “the greatest threat to humanity.” Anthropic’s Dario Amodei puts the odds of catastrophe at 25%. Elon Musk’s estimate is only slightly better at a 20% chance of annihilation. Microsoft’s Mustafa Suleyman says if we can’t prove it’s safe, we shouldn’t build it.
And yet, all of them are building it, at full speed.
As AI theorist Eliezer Yudkowsky noted this week, many of the people who understand the genuine risks associated with AI are currently employed by AI companies. In AI terms, their massive paychecks may have made them misaligned with the interests of the rest of humanity.
Another possibility, of course, is that the NDAs these AI insiders have signed are keeping them quiet about an even more inconvenient truth — that “superintelligence” isn’t nearly as close as their bosses would have us believe.
Before the launch of GPT-5, Sam Altman likened it to the Trinity atomic bomb test, with himself, one suspects, in the Oppenheimer role. But GPT-5 has been less an atomic bomb and more a damp squib.
As The New Yorker’s Cal Newport recently put it, the “scaling laws” that once defined AI progress have hit a wall. If GPT-3 was a sedan and GPT-4 a sports car, GPT-5 is just a slightly more souped-up sports car.
Even if the technology has slowed, the money hasn’t. Bain & Company projects that keeping pace with AI’s compute and energy demands will require about US $2 trillion (A$3.1 trillion) a year in new revenue by 2030, a scale-up it warns may be financially unsustainable. Just this week, Altman said OpenAI alone plans to spend US $1.4 trillion (A$2.2 trillion) on infrastructure. But to do that, the company will have to earn far more than it currently does. OpenAI’s revenue, about US $13 billion (A$20 billion) this year, underscores how far there is to go.
Meanwhile, in Beijing, the Party-state is hedging its bets. As Matthew Johnson pointed out in Jamestown’s China Brief earlier this month, the CCP is pursuing superintelligence while simultaneously embedding AI deep within the existing economy — using it to boost productivity, modernise industry, and tighten social control. Frontier breakthroughs and applied deployment aren’t competing visions in China’s model; they’re two sides of the same state-directed system.
If the big American AI firms have bet on the wrong strategy, the fallout won’t just be financial. As WIRED put it this week, AI may not be just another tech bubble; it could be “the ultimate bubble,” one that “hits all the right notes” of past manias, from radio to aviation to the dot-com era.
Beijing’s two-track approach, pairing moonshot ambition with hard-nosed industrial policy, might prove the smarter play.
My must-reads
Foreign Affairs
Michael C. Horowitz and Lauren A. Kahn, two leading U.S. defence and technology policy experts, argue that America’s fixation on AGI is a costly distraction from the real contest: rapidly adopting and integrating today’s practical AI systems.
The New Yorker
Cal Newport, a computer scientist and writer on technology and productivity, argues that GPT-5’s underwhelming debut suggests the era of rapid AI breakthroughs may be over, and that it’s time to replace hype with realism.
Pluralistic
Cory Doctorow, novelist and long-time tech critic, and the guy who coined “enshittification” to describe platform decay, delivers a sharp takedown of the AI economy, arguing that the entire sector runs on hype, debt, and circular accounting. It’s an especially clear and persuasive version of the “bear case” against AI economics.