Import AI 445: Superintelligence & Scaling Laws

Jack Clark's Import AI 445 hits hard: Is 2026 the year we decide on superintelligence? Meanwhile, Facebook's Kunlun cracks scaling laws for recommenders, eyeing massive ad gains.

[Figure: Kunlun scaling-law graph showing normalized entropy dropping with compute]

Key Takeaways

  • Human-touch jobs thrive even as AI advances, boosting demand for premium human services.
  • Facebook's Kunlun unlocks recommender scaling laws, promising huge ad efficiency gains.
  • 2026 may prompt singularity policy debates, but superintelligence remains a 2050s bet.

Jack Clark’s Import AI 445 landed in inboxes yesterday, forcing readers to confront a stark question amid arXiv’s latest deluge.

Will 2026 go down as the year we locked in superintelligence timelines? That’s the opener in this edition, riffing on singularity debates while unpacking AI’s math breakthroughs and a fresh ML benchmark. But let’s cut through — Import AI 445 isn’t just hype; it’s a data-packed dispatch on where AI research stands today.

Don’t Bet on AI Wiping Out All Jobs Yet

Economists are pushing back. Hard.

Adam Ozimek, chief economist at the Economic Innovation Group, drops a Substack bomb: even super-smart AI won’t kill demand for humans. Why? People crave the ‘human touch’.

“There are many jobs and tasks that easily could have been automated by now - the technology to automate them has long existed - and yet we humans continue to do them,” he writes. “The reason is that demand will always exist for certain jobs that offer what I call ‘the human touch.’”

Live music. Actors. Waiters. Fancy concierges. These aren’t going anywhere. Ozimek calls it a ‘normal good’ — demand spikes as wallets thicken. Picture this: AI flips burgers perfectly, but you still tip the waiter at that Michelin-star spot for the banter.

My take? He’s onto something market-driven. Data from services sectors shows premium pricing for human elements — think 20-50% markups on ‘artisanal’ experiences. If AI automates the grunt work, we’re staring at a boom in high-wage ‘human artisan’ roles. Governments could juice wages via UBI or training subsidies. But here’s my unique angle: this mirrors the 19th-century craft revival post-Industrial Revolution, when machine-made textiles flooded markets, yet bespoke tailors thrived among the elite. History says humans pay for soul — AI just amplifies the divide.

Short version: Unemployment panic? Overblown. Markets will reroute labor to what bots can’t fake.

Facebook’s Kunlun: The Ad Machine Gets a Predictable Turbocharge

Shift gears to industrial AI. Facebook — Meta, whatever — just open-sourced details on Kunlun, their slick new recommender system.

It’s not sexy like LLMs. But it’s printing money.

Kunlun crushes efficiency: MFU jumps from 17% to 37% on NVIDIA B200s. That’s Model FLOPs Utilization, for the uninitiated — a key metric for how much of your GPU juice turns into actual smarts. Recommenders lag LLMs here (3-15% vs. 40-60%) thanks to wonky features like irregular tensors and tiny embeddings.
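To make the metric concrete, here's a minimal sketch of how MFU is typically computed: achieved training FLOP/s divided by the hardware's peak FLOP/s. The numbers below are illustrative only, not measured Kunlun or B200 figures.

```python
def mfu(model_flops_per_step: float, steps_per_sec: float, peak_flops_per_sec: float) -> float:
    """Model FLOPs Utilization: achieved training FLOP/s over hardware peak FLOP/s."""
    return (model_flops_per_step * steps_per_sec) / peak_flops_per_sec

# Hypothetical numbers: a training step costing 2e12 FLOPs,
# sustained at 185 steps/s, on a 1e15 FLOP/s accelerator.
print(f"{mfu(2e12, 185, 1e15):.0%}")  # → 37%
```

Doubling MFU at fixed hardware is effectively free compute, which is why a 17% → 37% jump matters so much at Meta's scale.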

But the killer? Scaling laws. Finally.

Facebook mapped power-law improvements in normalized entropy (NE) against gigaflops and layers. Pump in more compute, watch NE plummet predictably — just like Chinchilla laws for language models. This de-risks mega-investments. Why care? Ads are Meta’s lifeblood, shaping billions’ feeds and buys.
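A power law of this shape, NE ≈ a · C^(−b), is typically fit as a straight line in log-log space. Here's a self-contained sketch on synthetic data (the exponent and coefficient are made up for illustration, not Kunlun's reported values):

```python
import math

def fit_power_law(compute, ne):
    """Fit NE ≈ a * compute**(-b) via least squares on log-transformed data."""
    xs = [math.log(c) for c in compute]
    ys = [math.log(v) for v in ne]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - slope * mx)
    return a, -slope  # so that NE = a * C**(-b)

# Synthetic points lying on an exact power law NE = 0.8 * C**-0.05 (illustrative only).
compute = [1e9, 1e10, 1e11, 1e12]
ne = [0.8 * c ** -0.05 for c in compute]
a, b = fit_power_law(compute, ne)
print(round(a, 3), round(b, 3))
```

Once a and b are pinned down on small runs, you can extrapolate the entropy payoff of a much larger training budget before spending it, which is exactly the de-risking the article describes.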

Kunlun packs a Transformer block with GDPA-enhanced feeds and an Interaction block for feature mingling. Tricks abound: personalized weights, sequence summaries. Read the paper if you’re building.

Why Do Scaling Laws Matter More for Recommenders Than LLMs?

LLMs? We knew their laws years ago — loss drops, capabilities pop.

Recommenders? Trickier beasts, blending user sequences with context chaos.

Facebook cracked it. And here’s the market dynamite: predictable scaling means Meta can hurl unprecedented FLOPs at this without black-box roulette. Expect ad revenue to swell — my bold call, 15-25% efficiency gains by 2026, compounding to billions. Echoes Google’s AdWords scaling in the 2000s, when auction dynamics turned search into a cash volcano. Big Tech doesn’t sleep on this.

Skeptical? Fair. Recommenders aren’t ‘frontier’ like math proofs. But they fund the frontier.

AIs Crack Frontier Math — And a New Benchmark Emerges

Import AI 445 flags AI’s push into bleeding-edge math proofs. Models now tackling Olympiad-level stuff, edging toward formal verification.

Details sparse in the newsletter, but arXiv whispers of Lean-assisted proofs and AlphaProof successors. Why seismic? Math is AI’s Everest — pure reasoning, no data cheats.

Paired with a new ML research benchmark. Think GLUE for next-gen evals: standardized tests for scaling, safety, whatever. Researchers crave this; it levels the field beyond leaderboards.

My position: Bullish signal. If math falls, superintelligence timelines compress. But 2026? Too pat. Markets price in 2030 medians from Metaculus — Clark’s pivot year feels like PR spin for policy urgency.

Will 2026 Really Be the Superintelligence Reckoning?

Clark poses it starkly: Decisions on the singularity by 2026?

Data says nah. Expert surveys peg median AGI at 2040ish, superintelligence at 2050+. Compute trajectories help — 10x yearly — but bottlenecks loom: energy, chips, alignment.

Yet. Frontier math wins accelerate the clock. If proofs scale like Kunlun NE, we’re in uncharted territory.

Don’t sleep. But don’t panic-buy bunkers.

Why Does This Matter for Developers?

Industrial folks: Study Kunlun. Fork it. Those scaling laws? Gold for your recsys.

Researchers: Hit that new benchmark. It’ll define careers.

Everyone: Human-touch bets pay off. Train for it.

Import AI 445 reminds us — AI’s dual track: Sci-fi timelines, dollar-driven reality.

Frequently Asked Questions

What is Facebook’s Kunlun recommender?

Kunlun is Meta’s optimized recommender system, boosting MFU to 37%, with scaling laws tying NE to FLOPs and layers — key for ad feeds.

Will AI cause mass unemployment by 2026?

Unlikely. Human-touch demand (live music, concierge) persists and grows with incomes, per economists like Ozimek.

When will superintelligence arrive?

Surveys say 2050 median; 2026 forces decisions but not arrival — watch math proofs as leading indicator.

Written by Sarah Chen

AI research editor covering LLMs, benchmarks, and the race between frontier labs. Previously at MIT CSAIL.

Originally reported by Import AI
