Everyone figured OpenAI was the good guy in AI’s wild west — open source vibes, chatty tools for the masses, Sam Altman as the earnest innovator. Wrong. Ronan Farrow’s New Yorker takedown shows a company lurching toward military might, led by a CEO its board tried to axe, now embedding ChatGPT’s brains into classified ops. This shifts everything: AI isn’t just apps anymore; it’s the new arsenal, and control’s in shaky hands.
Farrow nails it early: the board fired Altman in 2023, doubting his trustworthiness. Picture that — your own directors think you’re sketchy.
“OpenAI’s products now reach into everything, from your smartphone to defence contracts to law enforcement.”
That’s from the piece sparking this firestorm, but Farrow digs deeper into the “war room” Altman rallied: crisis PR pros, heavy-hitter investors. Five days later, he’s back, thanks to Microsoft muscle and 700 employees threatening mutiny. Investor cash talks louder than ethics, apparently.
What the Hell Happened with Anthropic?
Anthropic, OpenAI’s squeaky-clean rival, balked at Uncle Sam’s overtures. Mass surveillance? Autonomous killers? No thanks, they said — deal exploded. Enter OpenAI, swooping in after the Trump administration’s Anthropic snub. Opportunistic? Altman called their first Pentagon pact “opportunistic and sloppy,” then slapped on “guardrails” PR spin. Same red lines Anthropic fled, mind you. Jake Laperruque from Tech Policy spots the bluff: the boundaries are indistinguishable.
Here’s the thing — OpenAI swears by democracy. “Deep collaboration,” they claim. But peek at co-founder Greg Brockman: $25 million Trump donor in January, plus an AI SuperPAC hauling $125 million for national regs over state chaos. Trump signs the EO limiting states, boom — federal playground for Big AI. Coincidences? Sure, if you buy Caesar’s assassins were all honorable.
And power’s exploding. $852 billion valuation despite $14 billion projected 2026 losses. Datacenters guzzling juice worldwide, white-collar jobs vaporizing. OpenAI staffers whisper it’s a “threat to humanity.”
Look. This reeks of Manhattan Project echoes — brilliant minds, unchecked power, self-regulation as the punchline. Back then, Oppenheimer’s crew built the bomb thinking they’d save the world; fallout was governments corralling the genie. OpenAI? They’re sprinting ahead, board scars fresh, political bets placed. Prediction: without hard external rails — not their flimsy ones — we’ll see AI-fueled strikes like Palantir’s Maven in Iran, girls’ school rubble as collateral. History screams: let CEOs self-regulate doomsday tech? Disaster.
Does Sam Altman’s Trust Deficit Even Matter?
Short answer: hell yes. The board ousted him for cause, yet here he is, Pentagon partner. Farrow’s sources paint an operator who turns crises into crowns. Reinstatement wasn’t redemption; it was capitalism’s checkmate. Now, with defense dollars flowing, ethical activists like Rutger Bregman push #QuitGPT boycotts. Will it stick? Doubtful — convenience trumps conscience.
But dig into the architecture. OpenAI’s shift isn’t a bug; it’s a feature. From nonprofit roots to Microsoft-backed behemoth, they’ve architected escape velocity from oversight. Military deal? Not slippage — strategic pivot. Why? Scale demands defense cash; ethics are speed bumps.
Staff leaks on existential risks clash with Brockman’s PAC plays. National regs mean less friction for deployment, more for rivals. Trump’s “minimally burdensome” standard? Green light for AI arms race, states neutered.
Worse: classified ops mean zero transparency. What’s ChatGPT advising drone swarms? Border patrols? We won’t know until the whistleblower drops.
Why Power Concentration Spells Doom for AI
Power vacuums fill fast. OpenAI’s hunger — compute, cash, contracts — mirrors Big Oil’s playbook. Self-regulated? They cut corners on safety for speed, just like Exxon on climate.
The rubble in Minab, from alleged AI-guided strikes, isn’t abstract. It’s the why of guardrails: humans glitch — good days, bad days, demagogue detours. Tech startups to nation-states need ironclad channels: social compacts, legal chokeholds, economic incentives flipping harm to help.
Altman’s charm offensive — TED talks, effective altruism cosplay — masks the board’s original verdict. We’ve learned: enterprises don’t self-regulate; they self-maximize.
So, Trump’s crew picks OpenAI over principled holdouts. Democracy’s “deep collaboration”? Donor checks clearing.
This changes the game. Expected: AI for good. Reality: trusted? No. Powerful? Absolutely. Time to demand real controls, before the next school becomes code’s casualty.
Frequently Asked Questions
What is OpenAI’s deal with the Pentagon?
OpenAI signed on for classified military AI use after Anthropic backed out over ethics like surveillance and autonomous weapons. They claim extra guardrails, but critics call it same-old.
Did OpenAI’s board really distrust Sam Altman?
Yes — Ronan Farrow reports they fired him in 2023 over trust issues; he fought back with investors and staff pressure, got reinstated days later.
Is OpenAI involved in political donations?
Co-founder Greg Brockman donated $25M to Trump efforts and backs an AI SuperPAC pushing national regs over state ones.