AI Hardware

Broadcom to Supply Anthropic With 3.5 GW of TPU Capacity From 2027

Imagine 3.5 gigawatts of compute — enough to power millions of homes — funneled straight to Anthropic's Claude models from 2027. Broadcom's latest filing reveals the scale of this three-way bet with Google.

Illustration of massive data center racks with Google TPUs supplied by Broadcom for Anthropic

Key Takeaways

  • Broadcom secures 3.5 GW TPU supply for Anthropic from 2027, atop 1 GW in 2026.
  • Anthropic's revenue run rate surges to $30B, signaling massive enterprise demand for Claude.
  • This cements Broadcom as a key AI ASIC implementer for top labs like Anthropic and OpenAI.

$30 billion. That’s Anthropic’s annualized revenue run rate now, up from $9 billion just months ago. More than a tripling, and a number that screams demand for Claude, even as the AI hype machine churns on.

But here’s the real jolt: Broadcom’s securities filing drops a bombshell. They’re supplying Anthropic with 3.5 gigawatts of Google TPU capacity kicking off in 2027. Add that to the 1 GW already slated for 2026, and you’ve got a multi-gigawatt beast tied to Anthropic’s performance.

Look. This isn’t some side hustle. It’s a three-way dance — Google designs the TPUs, Broadcom turns them into silicon reality with networking guts and packaging wizardry, TSMC fabs it all. And Anthropic? They’re the hungry customer, locking in U.S.-heavy infrastructure to hit that $50 billion American AI buildout pledge from late 2025.

Why Is Broadcom the Sudden AI Hardware Shadow Broker?

Broadcom’s playing both sides — or really, the implementation layer for the big three U.S. frontier labs. Think about it: they’re already deep in a $10 billion, 10 GW custom silicon deal with OpenAI. Now this. Nvidia’s still king of GPUs, sure, but Broadcom’s ASICs are creeping in, converting architectures into manufacturable chips while everyone scrambles for capacity.

It’s reminiscent of the 1990s, when Intel’s x86 lock-in starved competitors — except flip it. Broadcom’s not owning the architecture; they’re the indispensable middleman, supplying SerDes, power management, the works. Google owns TPU IP, but without Broadcom’s execution? Crickets.

And the numbers analysts are floating? Mizuho’s Vijay Rakesh pegs Broadcom at $21 billion AI revenue from Anthropic in 2026 alone, ballooning to $42 billion in 2027. No dollar figures in the filing, but the scale’s there — contingent on Anthropic keeping the cash flowing.

“This groundbreaking partnership with Google and Broadcom is a continuation of our disciplined approach to scaling infrastructure: we are building the capacity necessary to serve the exponential growth we have seen in our customer base,” Krishna Rao, Anthropic’s chief financial officer, said in the blog post.

Groundbreaking? Maybe. But Rao’s words mask the hedge: this capacity hinges on commercial success. No growth, no gigawatts.

Anthropic’s not ditching partners, either. AWS stays primary with Project Rainier — that Trainium 2 supercluster in Indiana. Google TPUs slot alongside, not supplant. OpenAI’s grabbing 6 GW of AMD GPUs too, with the first gigawatt expected soon. Nvidia? Still everywhere via clouds.

Does Anthropic’s $30B Run Rate Hold Up Under Scrutiny?

Over 1,000 businesses dropping $1 million-plus yearly — double from February. That’s enterprise adoption, not just hype. Claude’s carving a safety-first niche against GPT’s chaos, and it shows in the ledger.

But skepticism’s warranted. Run rates are snapshots, not guarantees. $9B end-2025 to $30B now? Explosive, yeah — yet tied to volatile API pricing, model upgrades, maybe some AWS credits inflating the top line (they’re mum on margins). And with 3.5 GW locked years out, they’re betting big on sustained demand.
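The public figures are easy to sanity-check. A quick back-of-envelope sketch (using only the numbers cited above; the split between customer tiers is not disclosed) shows the growth multiple and how little of the run rate the disclosed enterprise floor actually guarantees:

```python
# All inputs are figures cited in the article; everything else is arithmetic.
run_rate_now = 30e9    # current annualized revenue run rate, USD
run_rate_prior = 9e9   # figure cited for end-2025, USD

growth_multiple = run_rate_now / run_rate_prior
print(f"growth multiple: {growth_multiple:.1f}x")  # ~3.3x

# 1,000+ customers at $1M+/year sets only a floor on enterprise revenue:
enterprise_floor = 1_000 * 1e6
share_of_run_rate = enterprise_floor / run_rate_now
print(f"enterprise floor: ${enterprise_floor / 1e9:.0f}B "
      f"({share_of_run_rate:.0%} of run rate)")    # $1B, ~3%
```

In other words, the disclosed $1M+ cohort accounts for at most a few percent of the headline number; the rest rides on undisclosed contracts and API volume.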

Here’s my unique angle, absent from the filings: this reeks of the dot-com infrastructure gold rush, circa 1999. Back then, Cisco and optics firms gorged on fiber overbuilds, only for demand to plateau. Broadcom-Anthropic’s scaling for exponential customer growth, but what if inference efficiency flips the script? Smaller models, edge compute — suddenly those gigawatts look oversized.

How Does This Reshape the AI Chip Wars?

Nvidia’s grip? Shaking, but not snapping. GPUs dominate training still, and hyperscalers love the CUDA moat. TPUs shine on inference, cost-per-token — Google’s stack owns that. Broadcom’s win? They’re the ASIC whisperer, now for OpenAI and Anthropic, leaving Meta to its MTIA silos.

Through 2031, Broadcom has assured supply for Google’s next-gen racks. That’s architectural entrenchment — high-speed interconnects become the new battleground as racks scale to exaflop dreams.

Critique the spin: Anthropic calls it “disciplined scaling.” PR gloss. Truth? They’re all-in on U.S. soil amid export jitters, diversifying from AWS reliance without burning bridges. Smart hedging.

Deeper why: power walls. 3.5 GW isn’t just compute; it’s a nuclear-plant-level bet on grid upgrades, cooling breakthroughs. Data centers guzzle — this deal spotlights the hidden crisis in AI’s underbelly.
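The scale is worth making concrete. A rough sketch (the reference values for reactor output and household draw are assumptions, not from the filing):

```python
# Rough scale check. capacity_gw is from the filing; the reactor and
# household figures are assumed reference values for comparison only.
capacity_gw = 3.5
reactor_gw = 1.0   # typical large nuclear reactor output, ~1 GW
home_kw = 1.2      # rough average US household draw, ~1.2 kW

reactors = capacity_gw / reactor_gw
homes = capacity_gw * 1e6 / home_kw  # convert GW to kW
print(f"~{reactors:.1f} reactor-equivalents, ~{homes / 1e6:.1f}M homes")
```

Roughly three and a half large reactors' worth of sustained output, or on the order of three million homes — committed to one customer's inference and training fleet.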

And Broadcom? Stock’s up, AI revenue exploding. They’re not fabricating like TSMC, but implementing like no one else. Prediction: by 2028, they’ll eclipse the AI chip revenue of any pure-play rival except Nvidia.


Partnerships like this expose the fragility: one geopolitical twitch, a dip in TSMC yields, and the whole stack wobbles. Anthropic’s U.S. focus is a shield against that, but at premium cost. Are Google’s TPUs cheaper long-term? Jury’s out.

Why Should Developers Care About TPU Shifts?

Forget GPUs if you’re optimizing inference. TPUs crush on structured workloads — Anthropic’s Claude fleet proves it. But lock-in risk: Google’s software stack means porting pains from PyTorch.
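The porting concern is concrete: TPUs are programmed through the XLA compiler stack, which JAX targets natively. A minimal sketch (assumed tooling for illustration; nothing in the deal specifies Anthropic's internal stack) shows how the same compiled code runs unchanged on CPU, GPU, or TPU backends:

```python
import jax
import jax.numpy as jnp

# jax.jit compiles through XLA, the same compiler stack TPUs use;
# identical code runs on whatever backend is available.
@jax.jit
def attention_scores(q, k):
    # Scaled dot-product scores: the dense, structured matmul
    # workload TPUs are optimized for.
    return jnp.matmul(q, k.T) / jnp.sqrt(q.shape[-1])

q = jnp.ones((4, 64))
k = jnp.ones((8, 64))
scores = attention_scores(q, k)
print(scores.shape)                # (4, 8)
print(jax.devices()[0].platform)   # 'cpu', 'gpu', or 'tpu'
```

The catch the paragraph above flags: a PyTorch codebase doesn't get this portability for free, and moving kernels, data pipelines, and checkpoints over to an XLA-first stack is real migration work.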

Businesses? Those 1,000+ million-dollar spenders signal safe, enterprise-grade AI. Claude’s constitutional AI pitch lands.



Frequently Asked Questions

What is the Broadcom Anthropic TPU capacity deal?

Broadcom will supply 3.5 GW of Google TPU capacity to Anthropic from 2027, plus components for Google’s racks through 2031 — all performance-contingent.

How much revenue does Anthropic make now?

Annualized run rate topped $30 billion, with over 1,000 customers spending $1M+ yearly.

Will TPUs replace Nvidia for AI labs?

Not fully — they complement GPUs, excelling in inference while Nvidia dominates training.

Written by Elena Vasquez

Senior editor and generalist covering the biggest stories with a sharp, skeptical eye.



Originally reported by Tom's Hardware - AI
