Reasoning Models Change AI Scaling

Forget scaling up parameters — reasoning models like o1 and R1 scale by thinking harder. This flips AI's foundational bet on its head.

Reasoning Models Upend AI's Obsession with Sheer Size — theAIcatchup

Key Takeaways

  • Reasoning models scale via inference-time thinking, upending pre-training dominance.
  • DeepSeek R1's open recipe enables rapid open-source replication.
  • This shift mirrors chess AI's move to smarter search over brute force.

Picture this: a high school math whiz, pencil scratching furiously across scrap paper, muttering through algebra that once looked impossible.

That’s o1 in September 2024, not some kid: OpenAI’s reasoning model that solved 74% of AIME problems on average, leaving GPT-4o in the dust at 12%.

Reasoning models. There, I said it early. They’ve shattered the gospel that’s ruled AI since 2020: bigger is always better.

The Scale-Only Delusion Everyone Bought

Back then, Kaplan’s scaling laws hit like scripture. Pump in more compute, data, parameters — watch performance climb that smooth power-law curve. Labs bet the farm. Nvidia stock rocketed. Everyone chased pre-training marathons costing billions.

But here’s the crack nobody inspected closely enough. Those laws? They were measured on next-token prediction loss, not real thinking. Multi-step puzzles? Coding riddles? The giants stumbled, compressing everything into one frantic forward pass.

And then, wham, o1 and then DeepSeek’s R1 in January 2025 exposed the blind spot: inference-time compute. Let the model think longer, generate its own scratchpad of thoughts. Cheaper than training from scratch, and for reasoning tasks, just as potent.

It’s like realizing your muscle car could win races not by souping up the engine endlessly, but by perfecting the driver’s technique on the track.

Chain-of-Thought: The Spark That Fizzled — Until Now

Chain-of-thought prompting dropped in 2022, courtesy of Google’s Wei et al. Simple hack: show the model a few worked examples that reason step by step, or just tell it to ‘think step by step.’ Boom: math scores leaped, no retraining needed.

Why? Transformers spit tokens sequentially; each one sips the full prior context. Those reasoning steps? They bulk up the context, turning output into working memory. No CoT, and it’s all crammed into a single-shot guess.
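That working-memory effect can be sketched as a toy generation loop. Everything here is illustrative: the step function below is a hypothetical stand-in for a model’s next-step prediction, not a real LLM call.

```python
def generate_with_scratchpad(prompt, step_fn, max_steps=8):
    """Toy autoregressive loop: each emitted reasoning step is appended to
    the context, so later steps condition on earlier ones. That growing
    context is the 'working memory' behind chain-of-thought."""
    context = prompt
    for _ in range(max_steps):
        step = step_fn(context)   # a real system would call the model here
        context += "\n" + step    # the step becomes part of the context
        if step.startswith("Answer:"):
            break
    return context

# Hypothetical 'model' that adds two-digit numbers one column at a time.
def toy_adder_steps(context):
    if "ones:" not in context:
        return "ones: 7 + 5 = 12, write 2 carry 1"
    if "tens:" not in context:
        return "tens: 4 + 3 + 1 = 8"
    return "Answer: 82"

trace = generate_with_scratchpad("Compute 47 + 35.", toy_adder_steps)
print(trace.splitlines()[-1])  # → Answer: 82
```

Strip the intermediate steps and the final token has to encode the whole computation in one guess; that is the single-shot failure mode described above.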

Problem was, it flaked out. One prompt: crisp logic. Next: rambling nonsense. Models didn’t know when to grind or glide.

“The core thing is that RL can teach the model how to think specifically, to develop reasoning strategies that lead to correct answers on verifiable tasks like mathematics and code.”

OpenAI’s o1 system card nails it there. They juiced reinforcement learning to forge reliable chains. Self-correction mid-stream. Decompose beasts into bites. Backtrack dead ends. All emergent from RL rewards on verifiable wins — math proofs, code that runs.

Result? Crank the thinking tokens, answers sharpen. o1 doesn’t just mimic thought; it’s trained to wield it.
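One pre-o1 way to spend inference compute for accuracy is self-consistency (Wang et al., 2022): sample several independent reasoning chains and majority-vote their answers. A minimal simulation, where the stochastic ‘solver’ is a stand-in for a real model, not an actual LLM:

```python
import random
from collections import Counter

def majority_vote(sample_fn, n):
    """Self-consistency: draw n independent answers, return the mode."""
    answers = [sample_fn() for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Toy stand-in for a stochastic reasoner that is right 60% of the time.
rng = random.Random(0)
def noisy_solver():
    return 42 if rng.random() < 0.6 else rng.randint(0, 100)

print(majority_vote(noisy_solver, 1))    # one sample: often wrong
print(majority_vote(noisy_solver, 51))   # more think-time, far more reliable
```

o1-style training goes further, making a single long chain reliable, but the compute-for-accuracy trade is the same lever.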

DeepSeek R1: The Open Secret That Shook the Labs

DeepSeek didn’t stop at matching o1’s benchmarks. They spilled the recipe — a godsend for us open-source watchers.

Started with DeepSeek-V3-Base, a 671B MoE beast. Straight to RL (GRPO) on that, no SFT warmup. Reward? A correct final answer inside <answer> tags, with the reasoning inside <think> tags. No human labels, no preference models. Pure outcome: did it solve it?
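A rule-based reward in that spirit might look like the sketch below. The `<think>`/`<answer>` tag convention is from the R1 paper; the specific weights and function shape here are illustrative assumptions, not DeepSeek’s code.

```python
import re

THINK = re.compile(r"<think>(.+?)</think>", re.S)
ANSWER = re.compile(r"<answer>(.+?)</answer>", re.S)

def r1_style_reward(completion: str, ground_truth: str) -> float:
    """Rule-based reward in the spirit of DeepSeek-R1-Zero: a small format
    bonus for using <think>/<answer> tags, plus an outcome reward for a
    correct final answer. Weights are illustrative, not from the paper."""
    reward = 0.0
    think = THINK.search(completion)
    answer = ANSWER.search(completion)
    if think and answer:
        reward += 0.1                                 # format reward
        if answer.group(1).strip() == ground_truth.strip():
            reward += 1.0                             # outcome: did it solve it?
    return reward

good = "<think>2x = 10, so x = 5</think><answer>5</answer>"
print(r1_style_reward(good, "5"))             # → 1.1
print(r1_style_reward("x is maybe 5", "5"))   # → 0.0
```

No learned reward model anywhere in that loop, which is exactly why it resists the reward hacking that plagues preference-trained setups.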

R1-Zero’s arc is eerie. Pass@1 on AIME 2024? From 15.6% to 71%. Reasoning traces ballooned. The model — unprompted — began rereading problems, self-checking, flagging errors.

Spontaneous strategy. No hand-holding. Just optimize for truth, and behaviors bloom.

DeepSeek’s paper leaves plenty of details unstated, but the signal’s clear: this isn’t alchemy. It’s replicable engineering.

Why Did AI Ignore This Inference Lever for So Long?

Look, pre-training’s sexy — visible arms race, benchmark fireworks. Inference? Messy, variable costs, harder to patent.

But my hot take, one you won’t find in OpenAI’s spin: this mirrors the 1990s chess engine pivot. Brute-force minimax gave way to alpha-beta pruning and selective deepening — more smarts per node, not endless breadth. Go’s AlphaZero took it further: MCTS for thinking trees on the fly.

Reasoning models are AI’s MCTS moment. They’ll slash the AGI compute apocalypse hype (sorry, doomers). Why hoard exaFLOPs for training when inference democratizes power? Your laptop could host a reasoner that outthinks datacenter dinosaurs.

Bold prediction: by 2026, open-source reasoning models will lap closed ones on code and science benches. DeepSeek just handed the torch.

OpenAI’s cagey — ‘system card’ teases, no full blueprint. Classic PR fog. DeepSeek? Raw transparency. That’s the architectural shift: from black-box cathedrals to dissectible engines.

How Do You Train Your Own Reasoning Model?

Grab a strong base like Llama or Mistral. Skip SFT if you’re bold — go zero-shot RL.

Reward signal: parse out the <answer> blocks, verify the final answer against ground truth. Math libs like SymPy, code sandboxes. Iterate.
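A minimal verifier along those lines, using `Fraction`-based numeric equivalence as a dependency-free stand-in for a full SymPy `simplify` check; the tag format and helper names are assumptions for illustration, not any particular codebase’s API.

```python
import re
from fractions import Fraction

ANSWER = re.compile(r"<answer>(.*?)</answer>", re.S)

def extract_answer(completion: str):
    """Pull the model's final answer out of its <answer> block, if any."""
    m = ANSWER.search(completion)
    return m.group(1).strip() if m else None

def math_equal(a: str, b: str) -> bool:
    """Numeric equivalence: '1/2' and '0.5' both count as the same answer.
    A real pipeline might use SymPy here for symbolic expressions."""
    try:
        return Fraction(a) == Fraction(b)
    except ValueError:
        return a.strip() == b.strip()   # fall back to exact string match

def verify(completion: str, ground_truth: str) -> bool:
    ans = extract_answer(completion)
    return ans is not None and math_equal(ans, ground_truth)

print(verify("<think>half of one</think><answer>1/2</answer>", "0.5"))  # → True
```

The normalization step matters: without it, a model writing `0.5` against a ground truth of `1/2` gets punished for a correct answer, and the RL signal turns noisy.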

Watch traces evolve. Verification loops and subproblem splits will creep in. Unnerving, but effective.

Costs? Inference-heavy training, but peanuts next to pre-training. MoE architectures shine here: only a fraction of the parameters activate per token.
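That per-token sparsity is easy to sketch. The toy gating layer below routes each token to its top-2 of 8 scalar ‘experts’; everything is illustrative pure Python, not DeepSeek’s actual architecture.

```python
import math
import random

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(token_vec, experts, gate_weights, k=2):
    """Toy mixture-of-experts layer: score every expert with a linear gate,
    run only the top-k, and mix their outputs by renormalized gate
    probability. Running k of len(experts) experts per token is what keeps
    MoE inference cheap."""
    logits = [sum(w * x for w, x in zip(row, token_vec)) for row in gate_weights]
    probs = softmax(logits)
    top = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)
    out = sum(probs[i] / norm * experts[i](token_vec) for i in top)
    return out, top

# 8 toy scalar 'experts'; only 2 actually run per token.
rng = random.Random(0)
experts = [lambda v, s=s: s * sum(v) for s in range(1, 9)]
gate = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(8)]
out, active = moe_forward([0.1, 0.2, 0.3], experts, gate, k=2)
print(len(active))  # → 2
```

Scale the idea up and you get a 671B-parameter model that only touches a few tens of billions of parameters per generated token.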

We’re early. Bugs lurk: overthinking trivia, reward hacking. But the vector’s right.

This isn’t hype — it’s the fork in the road. Scale laws still hold, but now dual-axis: train big, think deep. Labs pivoting already.

The Open Source Edge in the Reasoning Race

DeepSeek R1 isn’t just competitive; it’s a blueprint for hordes. Chinese labs out-open-sourcing the West? Ironic, potent.

Expect forks galore. Fine-tune on domain data — law, medicine. Inference scaling means edge deployment booms: reasoners in apps, not clouds.

Critique time: OpenAI’s o1 wowed, but secrecy breeds skepticism. Is it truly general, or benchmark-juiced? R1’s openness invites scrutiny — and improvement.


Frequently Asked Questions

What are reasoning models like o1 and R1?

They’re LLMs trained via RL to generate long chains of thought at inference, boosting multi-step reasoning without massive retraining.

How do reasoning models differ from GPT-4o?

GPT-4o crams reasoning into one pass; reasoners like o1 use extra tokens as scratchpad, scaling performance with think-time.

Can I run DeepSeek R1 on my own hardware?

Yes, with caveats: the full 671B MoE needs serious hardware, but DeepSeek’s distilled variants (1.5B to 70B) run on consumer GPUs for many tasks.

Written by Aisha Patel

Former ML engineer turned writer. Covers computer vision and robotics with a practitioner perspective.



Originally reported by Dev.to
