Ban it.
That’s the gut-punch demand from a who’s-who of brains and boldfaces. Yoshua Bengio, Geoffrey Hinton, Steve Wozniak, even Prince Harry—they’re all in. The Future of Life Institute drops this bomb: halt superintelligence till it’s provably safe, controllable, and the public nods yes. And get this—a fresh U.S. poll backs ‘em up. Only 5% dig the wild-west AI sprint. A full 64% say no superbrains without scientific thumbs-up.
Here’s the thing. Superintelligence—AI outsmarting humans at everything—isn’t sci-fi. Experts peg it at under a decade away. But control? Nah. We got zilch. Bengio nails it:
“Frontier AI systems could surpass most individuals across most cognitive tasks within just a few years. These advances could unlock solutions to major global challenges, but they also carry significant risks.”
Risks like extinction, folks. Or just your job vanishing in a puff of code.
Why the Sudden Freakout?
Look, AI hype’s been nonstop. ChatGPT dazzles, stock prices soar. But this crew—Nobel winners, admirals, evangelicals, actors—isn’t buying the salesman pitch. Admiral Mike Mullen, who ran the Joint Chiefs? He’s signed on. Stephen Fry calls it “a frontier too far.” Even will.i.am and Joseph Gordon-Levitt. (Yeah, the rapper and the Inception guy care about doomsday bots.)
It’s politically rainbow too. Left, right, faith, secular. That’s rare. FLI’s Max Tegmark boasts: “95% of Americans don’t want a race to superintelligence, and experts want to ban it.” Poll says 73% crave ironclad regs. Status quo? Toilet-flush levels of support.
But wait—is this real alarm or savvy PR? FLI’s no newbie; they’ve sounded klaxons before. Critics whisper it’s theater to kneecap rivals. Nah. This feels legit. Public’s spooked. Tech titans? Crickets, mostly.
And here’s my twist nobody’s yelling: this echoes the 1975 Asilomar conference on recombinant DNA. Scientists self-paused gene splicing till safeguards clicked. Result? Biotech boom without Frankenstein plagues. Bold prediction: superintelligence moratorium could spark a “pro-human AI renaissance,” as FLI dreams—tools for cancer cures, climate fixes, sans Skynet.
Do Americans Really Hate Superintelligence?
Hell yes. That poll—YouGov, n=2000—is gold. Folks want AI for meds, energy, smarts. Not god-machines replacing souls. “Nobody developing these AI systems has been asking humanity if this is OK,” gripes FLI’s Anthony Aguirre. We asked. They roared back: unacceptable.
Break it down. Economic wipeout looms—jobs gone. Freedoms? AI overlords erode ‘em. Dignity? Poof. Security risks? Hackable nukes. Extinction? Don’t laugh; Hinton quit Google over it.
Tech bros spin: “Benefits outweigh!” Sure, maybe. But rushing blind? That’s casino gambling with humanity’s chips. Public wants brakes. Regs. Consensus. Not this cowboy code rush.
Big Tech’s deaf.
Companies like OpenAI and Anthropic chase AGI trophies, billions poured in. Investors drool. But signatories like Richard Branson and Daron Acemoğlu scream foul. It’s misalignment roulette: AI goals skew, boom—unintended apocalypse. Or malicious hackers weaponize it. We’ve no off-switch blueprint.
Can We Actually Stop the Superintelligence Train?
Doubt it. But try we must. Prohibition till two boxes checked: science says safe/controllable; public buys in. Enforceable? Globally? Ha. Treaties flop on nukes sometimes. But pressure builds—EU AI Act bites, U.S. polls sway pols.
Unique jab: this coalition’s star power mocks Silicon Valley’s echo chamber. Harry and Meghan? Wozniak? They’re the PR kryptonite Big Tech ignores at peril. Watch lawmakers cite this come election season.
Wander a sec—remember nuclear non-proliferation? Flawed, but contained bombs. AI’s sneakier, dual-use nightmare. No fallout glow to spot it.
Enforce or else.
FLI pushes “secure innovation”—narrow AIs for real problems. Health hacks. Energy wins. Not omnipotent overlords. Smart. Why race to replace humans when tools can amplify us?
Corporate spin? “Innovation dies with bans!” Bull. Asilomar proved pauses birth safer leaps. Prediction: voluntary halt now, or forced regs later—messier, costlier.
The Coalition’s Heavy Hitters
Scroll the list: Turing laureates Hinton, Bengio; AI pioneer Stuart Russell. Nobels Fihn (nukes), Wilczek (physics). Faith: evangelicals Moore, Kim; Vatican’s Benanti. Security: Mullen. Culture: Fry, Gordon-Levitt, will.i.am, Sussex royals. Diverse? Understatement.
They say: no super till safe + buy-in. Simple. Sane.
What Happens Next?
Momentum. Global echoes—UK, EU sniffing regs. U.S. Congress eyes bills. Poll sways voters. Tech? Musk nods caution sometimes; Altman dodges.
My take: ignore at peril. This ain’t Luddites. It’s prophets.
Wake up.
Rushing superintelligence without brakes is like handing toddlers nukes. Cute till meltdown. Public knows. Experts know. Time for deeds, not demos.
Frequently Asked Questions
What is superintelligence?
AI beating humans at nearly all cognitive tasks—think every job, strategy, invention. Experts say <10 years out.
Should superintelligence be banned?
Not forever, say signers—just till it’s safe, controllable, and public-approved. Poll: 64% of Americans agree.
Who signed the superintelligence prohibition call?
Bengio, Hinton, Wozniak, Mullen, Branson, Fry, Prince Harry—plus Nobels, faith leaders, more.