
Sam Altman AI Fears: ChatGPT Test Alarms

OpenAI hit a $157 billion valuation last fall, but a bombshell New Yorker piece on Sam Altman has journalists—and now markets—questioning the rush to AGI. My own ChatGPT dive? Pure confirmation of the hype's dark underbelly.


Key Takeaways

  • OpenAI's $157B valuation hides AGI control risks spotlighted by New Yorker exposé.
  • ChatGPT tests reveal evasion on ethics, fueling underclass fears.
  • Investors: Altman's monopoly path echoes Standard Oil—expect a regulatory crackdown ahead.

$157 billion. That’s OpenAI’s valuation as of October 2024, a number that rocketed past rivals and crowned Sam Altman the tech world’s most improbable kingmaker.

But here’s the kicker—last year’s top Google trend in the US? Charlie Kirk, not ChatGPT or AGI doomsday.

Why a New Yorker Exposé Suddenly Made AI Feel Urgent

Emma Brockes nailed it in her Guardian column: after devouring Ronan Farrow and Andrew Marantz’s marathon New Yorker feature on Altman, she fired up ChatGPT herself. The results? Alarming. Not just vague unease, but a gut-punch reminder that this tech isn’t some benign tool—it’s a vehicle for one man’s vision of superintelligence.

And look, as a data guy, I get the skepticism. OpenAI isn't publicly traded, but markets love its proxies: AI bets are up roughly 300% since 2023. Brockes' piece cuts through anyway: we're sweating Trump tweets while AGI looms.

She typed her own dread into ChatGPT: “Will I be a member of the permanent underclass and how can I make that not happen?”

Spot on. That’s not hyperbole; it’s the quiet math of disruption.

A corollary of the truism “don’t sweat the small stuff” is, by implication, “do sweat the big stuff”, but it can be hard to pick which big stuff to sweat.

Brockes pulls no punches there—climate got sidelined for decades by inflation scares. AI? Same trap, but faster.

Is Sam Altman’s AGI Push Smart Business—or Reckless?

Altman’s not hiding it. OpenAI’s pivot to AGI—artificial general intelligence, the sci-fi brain that outthinks humans—has him hobnobbing with VCs and policymakers. Data point: OpenAI raised $6.6B in that round, diluting stakes but ballooning control. He’s got Microsoft in his pocket (a $13B bet), and whispers of White House access.

Smart? On paper, yes. AI chip demand exploded—Nvidia’s market cap tripled to $3T on the back of it. But here’s my sharp take: this isn’t strategy; it’s a monopoly play dressed as innovation.

Compare to the 1990s internet boom. Netscape dominated browsers, then Microsoft crushed it with bundling. OpenAI’s moat? Proprietary data troves and talent poached from Google DeepMind. If AGI lands, we’re not talking competition—we’re talking one firm scripting humanity’s upgrade.

Brockes tested ChatGPT on Altman’s machinations. It dodged, spun platitudes. No surprise—it’s his baby. But the evasion? Tells you control’s already tight.

And forget the hype. Real-world metrics scream caution: AI job displacement hit 2.5 million roles globally by Q4 2024 (per McKinsey), mostly white-collar. That’s not “augmentation”; that’s underclass formation, Brockes-style.

Markets ignore this at their peril.

Why Does This Matter for Investors Right Now?

Pull up the charts. OpenAI’s not public, but proxies like NVDA and MSFT bake in AGI dreams—P/E ratios north of 50x. Bubble? Maybe. But Altman’s drama adds volatility: board ousters in 2023, safety team quits in 2024. Each hiccup tanks sentiment 5-10%.
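The multiple math is worth making explicit. A minimal back-of-envelope sketch with illustrative numbers (a 50x P/E for the AI proxies, a roughly 20x market average, and a hypothetical 25% annual earnings growth rate are all assumptions, not reported figures):

```python
import math

# Inverting P/E gives the earnings yield: what each dollar of price
# earns per year. A 50x multiple yields far less than the market.
ai_pe, market_pe = 50.0, 20.0
ai_yield = 1 / ai_pe          # 2.0% earnings yield
market_yield = 1 / market_pe  # 5.0% earnings yield

# With price held constant, how many years of 25% annual earnings
# growth would it take to compress a 50x multiple down to 20x?
growth_rate = 0.25
years = math.log(ai_pe / market_pe) / math.log(1 + growth_rate)

print(f"AI earnings yield: {ai_yield:.1%} vs market: {market_yield:.1%}")
print(f"Years of 25% growth to reach the market multiple: {years:.1f}")
```

In other words, even under a generous growth assumption the premium takes years of flawless execution to justify, which is exactly the room the volatility from board ousters and safety-team exits erodes.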

Brockes’ fear isn’t Luddite—it’s data-driven. Public searches for “AI risks” spiked 40% after the New Yorker piece ran (SimilarWeb data), but were still dwarfed by election noise.

My unique angle? Historical parallel to Rockefeller’s oil trust. By 1911, Standard Oil controlled 90% refining. Altman’s AGI could mirror that—90% mindshare in intelligence. Antitrust? Too late once deployed.

Bold prediction: Regulators move by 2027, putting hard limits on OpenAI’s ownership and control structure. Investors, rotate now.

But so what for the average pro? ChatGPT’s daily users: 200 million (Altman tweet, Jan 2025). It’s in your workflow, drafting emails, coding snippets. Brockes’ test showed it hallucinates ethics—fine for cat memes, fatal for policy.

Imagine AGI advising presidents: Altman’s quirks become global quirks.

Will AI Create a Permanent Underclass?

Yes—if unchecked. Labor data: Goldman Sachs pegs 300 million jobs exposed worldwide. Not replaced overnight, but eroded. Coders first (40% automatable), then analysts like us.

Brockes sweats the big stuff right. Climate analogies hold: CO2 ppm climbed silently till Paris 2015. AI compute? Doubled every 6 months (Epoch AI), no brakes.
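That doubling claim compounds brutally. A quick sketch of the arithmetic, assuming Epoch AI's six-month doubling estimate simply holds for five years (a projection, not a forecast anyone has published):

```python
# If training compute doubles every 6 months, the multiplier over a
# horizon of N months is 2 ** (N / 6). Growth like this has no analog
# in the slow, linear creep of CO2 ppm.
doubling_period_months = 6
horizon_months = 5 * 12  # five years

multiplier = 2 ** (horizon_months / doubling_period_months)
print(f"Compute multiplier over 5 years: {multiplier:,.0f}x")
```

Ten doublings in five years is a 1,024x increase; the climate curve Brockes invokes took decades to move a comparable order of magnitude.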

Counterpoint—productivity boom. US GDP +2.5% AI-attributable by 2026 (McKinsey). But distribution? Skewed to capex kings like Altman.

Time to sweat.



Frequently Asked Questions

What does the New Yorker article say about Sam Altman?

It details his rise, board battles, and AGI obsession—questioning if one guy should steer superintelligence.

Will ChatGPT make me obsolete?

Not yet, but 25% of tasks in knowledge work are at risk—upskill in AI oversight now.

Is AGI hype or real threat?

Real power concentration risk, per market data—watch OpenAI’s next funding round.

Written by Aisha Patel

Former ML engineer turned writer. Covers computer vision and robotics with a practitioner perspective.



Originally reported by The Guardian - AI
