AI Hardware

Intel EMIB-T Rollout Targets AI Amid TSMC Limits

Intel's EMIB-T isn't just tech—it's a lifeline for its foundry amid TSMC bottlenecks. Billions in deals loom as AI chips demand explodes.


Key Takeaways

  • EMIB-T adds TSVs for HBM4 power delivery, scaling to 12x reticle packages.
  • TSMC CoWoS oversubscribed; Intel eyes billions in deals from overflow demand.
  • Cheaper costs and higher utilization position EMIB-T as AI packaging contender.

EMIB-T flips the bridge.

Intel’s next-gen packaging tech—EMIB-T—lands in fabs this year, just as TSMC’s CoWoS starves for capacity. And here’s the kicker: Intel’s CFO, Dave Zinsner, spilled at Morgan Stanley’s TMT conference that they’re “close to closing some deals that are in the billions per year in terms of revenue” on advanced packaging alone. That’s not pocket change; it’s a potential rescue for Intel Foundry, which bled $10.3 billion last year on $307 million external revenue.

Look, EMIB-T isn’t some minor tweak. It supercharges Intel’s original Embedded Multi-Die Interconnect Bridge (EMIB), the horizontal signal router that’s shuttled chiplets since 2017’s volume ramp. Standard EMIB dodged through-silicon vias (TSVs) to keep bridges dirt-cheap—no interposer costs, no reticle nightmares. But power delivery? Routed around via squishy organic substrate, capping it for mid-tier stuff like Sapphire Rapids. Fine then. Useless now, for HBM4 beasts slurping gigawatts.

EMIB-T punches TSVs straight through that bridge die. Vertical power flows direct. Metal-insulator-metal (MIM) capacitors squash noise. A copper ground plane shields signals. Boom: HBM4 ready, scaling to monster 120mm x 180mm packages with 38+ bridges and 12+ reticle-sized dies. Dr. Rahul Manepalli, Intel Fellow, dropped specs at last May's conference: 45-micron bump pitch (shrinking to 25), 0.25 pJ/bit efficiency, UCIe-A at 32 Gb/s+ per pin. Supports HBM3 through HBM5.

“EMIB-T supports HBM3, HBM3E, HBM4, and future HBM5 stacks, and scales to a 120mm x 180mm package supporting more than 38 bridges and over 12 reticle-sized dies.”

That’s from Manepalli’s talk—raw authority on why this matters.

How Does EMIB-T Actually Work?

Picture chiplets jammed together, but instead of a giant silicon slab underneath (hello, pricey interposer), tiny bridge dies nestle in substrate pockets. EMIB-T evolves that: TSVs let power and signals zip vertically, dodging substrate resistance. Intel just teased a 2.5D/3D monster—16 compute elements on eight base dies, 24 HBM5 stacks, 10,296 mm² silicon. That’s 12x reticle. TSMC’s CoWoS-L? Hits 5.5x this year, 9.5x by 2027. Intel eyes 8x in 2026, 12x+ by 2028.
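The "12x reticle" claim checks out arithmetically. A quick sketch, assuming the standard lithography reticle field of 26 mm x 33 mm (858 mm², the common industry limit, not a figure from the article):

```python
# Sanity-check the 12x reticle claim against the teased 2.5D/3D design.
# Assumption: standard reticle field = 26 mm x 33 mm = 858 mm^2.

RETICLE_MM2 = 26 * 33            # 858 mm^2 per reticle exposure
silicon_mm2 = 10_296             # total active silicon in the teased design
package_mm2 = 120 * 180          # 120 mm x 180 mm substrate

reticle_multiples = silicon_mm2 / RETICLE_MM2
print(f"{reticle_multiples:.1f}x reticle")      # 12.0x
print(f"{package_mm2:,} mm^2 package area")     # 21,600 mm^2
```

In other words, the 10,296 mm² of silicon is exactly twelve full reticle fields, stitched together across bridges instead of one impossible exposure.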

Cost? Bernstein pegs EMIB in the low hundreds of dollars per chip. CoWoS for Rubin-class parts? $900-$1,000. Plus, roughly 90% bridge wafer utilization versus about 60% for interposers. No wonder customers are knocking: Nvidia gobbles 60%+ of TSMC's CoWoS capacity through 2026.
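Here is one simplistic way to combine those two levers. The $200 EMIB figure is an assumption standing in for "low hundreds," $950 is the midpoint of the CoWoS range, and treating wafer utilization as directly amortized into per-package cost is a deliberate simplification:

```python
# Illustrative cost-per-good-package math from the figures cited above.
# ASSUMPTIONS: $200 stands in for "low hundreds" (EMIB), $950 is the
# midpoint of the $900-$1,000 CoWoS range; utilization is modeled as
# simple amortization of wasted wafer area.

def effective_cost(packaging_cost_usd: float, wafer_utilization: float) -> float:
    """Cost per package once wafer-area waste is amortized in."""
    return packaging_cost_usd / wafer_utilization

emib = effective_cost(200, 0.90)    # ~$222
cowos = effective_cost(950, 0.60)   # ~$1,583
print(f"EMIB-T ~${emib:.0f} vs CoWoS ~${cowos:.0f} per package")
```

Under those assumptions the effective gap is roughly 7x, not the 4-5x the sticker prices alone suggest, which is why the utilization number keeps coming up.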

But wait—Intel’s no saint. Their 18A node yields lag till ‘26. Packaging’s the quick win, fastest ramp to AI cash. Ties into Musk’s Terafab dreams, though that’s vaporware optimism.

Why Is TSMC’s CoWoS Capacity a Total Mess?

TSMC ramps CoWoS from 35k wafers/month at end-2024 to 80k by end-2025 and 130k this year via the AP7/AP8 fabs. Still? Nvidia locks up 60%+. Blackwell, Rubin: all CoWoS-L. Reports scream full book, with CoWoS-L and CoWoS-S booked solid. Google slashes 2026 TPU orders by 1M units. TSMC CEO C.C. Wei says AI capacity is "three times short" of demand.
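Subtract Nvidia's share and the squeeze on everyone else is obvious. A sketch using the ramp figures above, assuming Nvidia holds a flat 60% throughout (the article says "60%+ through 2026"):

```python
# Residual CoWoS capacity after Nvidia's share, using the ramp figures
# above (wafers/month). Assumption: Nvidia holds a flat 60% throughout.

ramp = {"end-2024": 35_000, "end-2025": 80_000, "2026": 130_000}
NVIDIA_SHARE = 0.60

for period, wafers in ramp.items():
    leftover = wafers * (1 - NVIDIA_SHARE)
    print(f"{period}: {leftover:,.0f} wafers/month for everyone else")
```

Even at the 130k peak, non-Nvidia customers split roughly 52k wafers a month, which is the overflow Intel is angling for.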

Counterpoint sees 80% industry growth in 2026. Yet Intel can snag the overflow from firms that are locked out or can't wait. CoWoS-L is structurally oversubscribed. EMIB-T slips in: cheaper, more scalable.

Brutal timing for Intel.

Can Intel’s EMIB-T Really Challenge TSMC in AI Packaging?

Here’s my take—the unique angle you won’t find in pressers. Remember 2000s Intel? Dominated fabs, but Itanium flopped partly on packaging rigidity—full interposers too costly for scale. EMIB (2017) cracked that, enabling Ponte Vecchio’s 47 tiles when TSMC balked. EMIB-T? It’s Itanium’s revenge: modular, cost-slashed bridges for AI hyperscalers tired of TSMC queues. Prediction: If those billion-dollar deals seal, Intel Foundry flips profitable by 2027, snagging 15-20% AI packaging share. Not hype—math on costs, capacity, demand.

Intel’s PR spins turnaround tales, but skeptics (me included) eye yields. Still, packaging decouples from nodes. External revenue explodes if HBM4/HBM5 customers bite—Broadcom? Marvell? Overflow from Google, post-TPU cuts.

TSMC expands, sure. But Nvidia’s grip leaves scraps. Intel’s 90% utilization? Killer edge. Bernstein numbers don’t lie.

Wander a sec: Imagine 12-reticle behemoths powering next Grok or Llama clusters. EMIB-T glues ‘em cheap. TSMC? Queue up.

The deep dive pays off: Intel positions itself as the packaging dark horse.

Why Does This Matter for AI Developers and Buyers?

Dev? Shorter lead times mean faster prototypes, with no CoWoS waitlists killing roadmaps. Buyers? Cost drops on accelerators: low hundreds versus four figures cascades into cheaper inference rigs. Architectural shift: the packaging wars mirror the node wars of the 2010s. Winners? Flexible scalers like EMIB-T.

One hitch—Intel’s track record. Foundry losses massive. But billions signal traction. Musk tie-in? Pie in sky, yet validates AI focus.

Scale wins. Intel bets big.


Frequently Asked Questions

What is Intel’s EMIB-T packaging?

EMIB-T upgrades Intel’s bridge tech with TSVs for high-power AI chips, enabling HBM4/5 and massive multi-die scales cheaper than TSMC interposers.

How does EMIB-T compare to TSMC CoWoS?

Cheaper (hundreds vs. $1K), higher utilization (90% vs. 60%), bigger packages (12x reticle vs. 9.5x), but TSMC leads volume—for now.

When does Intel EMIB-T enter production?

Fab rollout starts this year, targeting AI accelerators amid TSMC shortages.

Written by Aisha Patel

Former ML engineer turned writer. Covers computer vision and robotics with a practitioner perspective.



Originally reported by Tom's Hardware - AI
