AI Ethics

OpenAI Insiders Distrust Sam Altman

OpenAI's fresh policy push for safe superintelligence lands awkwardly next to The New Yorker's takedown of CEO Sam Altman. Insiders say he's more power-hungry manipulator than humanity's guardian.


Key Takeaways

  • OpenAI insiders label Sam Altman a deceptive people-pleaser unfit for superintelligence stewardship.
  • New Yorker exposé contrasts sharply with OpenAI's same-day safety policy push, eroding CEO credibility.
  • Market implications: talent wars intensify, rivals like Anthropic gain edge amid trust crisis.

Everyone figured OpenAI would keep preaching the gospel of safe superintelligence — you know, that utopian blend of god-like AI and ironclad human oversight. Sam Altman himself has been the high priest, testifying before Congress, wooing investors with visions of abundance. But then, bam: The New Yorker unleashes a 10,000-word gut punch from insiders who paint him as untrustworthy at the helm of it all.

This flips the script hard. Markets shrugged off OpenAI’s latest policy paper — vague calls for ‘people-first’ regulations amid AI outsmarting humans. Yet the real jolt? Over 100 sources, internal memos, 12 Altman interviews, all screaming one thing: don’t bet your future on this guy’s promises.

Look, OpenAI dropped those recommendations Tuesday, vowing transparency on doomsday risks like rogue AIs or weaponized models toppling democracies. Noble stuff. ‘People will be harmed’ without checks, they warn, pledging a ‘higher quality of life for all.’ Investors ate it up; stock chatter stayed bullish.

Why the Sudden Sam Altman Backlash?

But here’s the disconnect — or disorienting whiplash, really. The New Yorker piece, penned after exhaustive digging, shreds that halo. Insiders depict Altman as a chameleon, morphing to please whoever’s in the room while chasing unchecked power.

One board member — anonymous, naturally — nails it:

He has “two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.”

Chilling. And not isolated. Ex-employees recount boardroom blindsides, like Altman’s 2023 ouster and swift return, fueled by his knack for rallying external saviors (Microsoft, anyone?).

Data backs the skepticism. OpenAI’s valuation soared to $157 billion post-Altman reinstatement, but internal churn? Sky-high. Key safety researchers bolted, citing clashing priorities. Recall Ilya Sutskever’s exit — the safety co-founder who helped boot Altman, only to see the CEO claw back control.

Can Investors Stomach Altman’s Power Plays?

Markets love winners, but this reeks of Theranos 2.0 vibes (my unique take: just like Elizabeth Holmes charmed VCs with blood-test fairy tales, Altman’s superintelligence sermons mask a board that’s more rubber-stamp than watchdog). OpenAI’s structure — capped-profit, mission-locked — was meant to prioritize humanity over hustles. Yet Altman’s moves scream empire-building.

Consider the timeline. Post-ouster, Altman jetted to DC, schmoozed lawmakers on AI safety. Back at OpenAI, safety teams shrank. Now, with GPT-5 whispers and Stargate supercomputer dreams ($100B Microsoft bet), trust fractures widen.

And the policy paper? Perfect PR foil. Released same day as the New Yorker hit, it’s like Altman yelling ‘Look over here!’ while dodging the mirror. Insiders say he tailors truths — investors get growth fairy tales, regulators get risk sermons, board gets loyalty tests.

Short version: if superintelligence arrives (odds? Betting markets peg 20% by 2030), who’s steering? A proven pleaser or a principled guardrail?

OpenAI’s rivals smell blood. Anthropic’s Dario Amodei — Altman’s ex-colleague — runs a safety-first shop, valued at $18B and growing steadily. xAI’s Elon Musk, fresh off his own lawsuit against OpenAI, mocks Altman’s flip-flops. Google DeepMind hunkers down, staying regulatory-compliant.

Market dynamics shift fast here. OpenAI commands roughly 70% of the LLM market (ChatGPT weekly users: 200M+), but scandals dent moats. Enterprise deals? Microsoft is locked in, but partners like Apple (amid rumors of delayed Siri integration) might pause.

Bold prediction: expect a 2025 board revolt redux. Altman’s people-pleasing won’t suffice when AGI tests hit. If history rhymes — think Uber’s Kalanick implosion amid ethics blowback — OpenAI forks toward safety or implodes.

What OpenAI’s Policy Paper Really Means (Or Doesn’t)

Back to that document. It urges global standards: compute governance, treaties on military AI, even ‘AI impact assessments’ like environmental regs. Sounds proactive. But toothless without enforcement — and who’s volunteering Altman as referee?

Insiders argue his track record kills credibility. Early OpenAI memos (reviewed by The New Yorker) show Altman pushing consumer apps over safety research. Now, with superintelligence on the horizon (Altman: ‘inevitable in years, not decades’), the irony bites.

One ex-executive: Altman ‘tells funders what they want — billions flow.’ Spot on. Venture funding hit $50B+ in AI last year; OpenAI slurps the lion’s share.

But wait — does this tank OpenAI? Nah, not yet. Revenue projections: $3.7B this year, up 1200% YoY. Users hooked. Still, trust erosion accelerates talent wars. Top AI PhDs flock to safer bets.

And governments? The EU’s AI Act clamps down on high-risk models; the US lags, though Biden’s executive order eyes safeguards. Altman’s charm offensive — 20+ Capitol Hill visits — might backfire if The New Yorker’s portrait sticks.

The Bigger AI Governance Mess

Zoom out. Superintelligence isn’t sci-fi; it’s market math. Compute costs plummet 10x yearly; models scale predictably (the Chinchilla scaling laws still hold). OpenAI’s o1-preview already reasons like a grad student.

Yet leadership voids persist. Altman’s duality — likable visionary, alleged deceiver — mirrors AI’s split soul: boundless upside, existential downside.

Critique the spin: OpenAI’s paper is less blueprint, more brochure. ‘Clear-eyed’ risks? Sure. But without independent oversight, it’s CEO fiat.


Frequently Asked Questions

What did The New Yorker say about Sam Altman?

They interviewed 100+ insiders, reviewed memos, and portrayed him as a power-seeking people-pleaser with little remorse for bending truths.

Can OpenAI deliver on superintelligence safety promises?

Doubtful under current leadership; internal distrust and safety team exits signal priorities skewed toward growth over guardrails.

Will this hurt OpenAI’s valuation?

Short-term no — revenue booms. Long-term? High risk if board acts or talent flees.

Written by Elena Vasquez

Senior editor and generalist covering the biggest stories with a sharp, skeptical eye.



Originally reported by Ars Technica - AI
