Remember March 2023? The Month AI Panic Peaked.
Headlines screamed about existential risk. Tech CEOs solemnly warned of “human extinction.” Over 30,000 people, including Elon Musk, Steve Wozniak, and Yuval Noah Harari, signed an open letter calling for a six-month pause on training AI systems “more powerful than GPT-4.”
Fast-forward to today, and that chorus of alarm has faded to a murmur.
What happened? Did we solve AI’s dangers, get bored of talking about them, or just decide that existential risk was bad for business?
The Rise (and Fall) of the AI Naysayers
For a brief moment, it felt like society had finally woken up to AI’s risks. The Future of Life Institute’s open letter wasn’t the only warning shot:
Geoffrey Hinton, the “Godfather of AI,” quit Google to warn of existential threats.
Researchers like Yoshua Bengio and Stuart Russell called for strict regulation, comparing AI to pandemics or nuclear weapons.
Even Sam Altman (OpenAI’s CEO) testified before Congress, urging lawmakers to rein in the tech.
But today? The loudest critics have either gone quiet or been drowned out by AI hype.
I'm not an AI doomer myself; all I'm saying is that it feels a little suspicious that these voices have gone quiet. Every prominent CEO or VC "influencer" now all but laughs when someone raises the possibility of a misaligned model.
Why the Silence?
1. The Bandwagon Effect
It’s hard to stay skeptical when there’s money to be made.
Many of AI’s loudest critics are now building it, investing in it, or writing Substacks about its transformative potential. Even Elon Musk—who signed the 2023 letter—launched his own AI startup, xAI.
When the skeptics join the gold rush, who’s left to sound the alarm?
2. The Normalization of Risk
We’ve seen this before.
Social media’s dangers were downplayed—until they weren’t.
Climate change warnings were ignored—until disasters struck.
AI risks feel abstract and distant—until, one day, they won't.
As Emily M. Bender (co-author of the "Stochastic Parrots" paper) put it: "We're being frog-boiled by corporate AI propaganda."
What Now?
The AI debate isn’t just a choice between “doomers” and “accelerationists.”
We need more than blind optimism or fatalism—we need critical thinking, accountability, and action.
Who benefits from silencing skeptics?
What happens when AI advances faster than our ability to control it?
How do we ensure AI serves humanity instead of the other way around?
Staying skeptical isn’t anti-progress. It’s how we shape the future before it shapes us.
It seems like everyone who called for a pause on training systems more powerful than GPT-4 got embarrassed: they lost the fight to pause anything, and then nothing obviously bad happened.