Adi Polak’s voice cuts through the podcast static: ‘Role assignment? That’s going away fast.’
She’s not mincing words on the InfoQ mic, and damn if it doesn’t hit like a cold splash of reality in this endless AI hype parade.
Zoom out: this is Thomas Betts grilling her on context engineering, the next supposed leap beyond prompt engineering. You’ve heard the buzz — stateless prompts turning into stateful smarts for agentic systems. But let’s cut the fluff. I’ve chased Silicon Valley unicorns for two decades, and every ‘revolutionary’ shift smells like yesterday’s database wars repackaged for LLM addicts.
Why Ditch Prompt Engineering Already?
Prompt engineering started as a desperate hack. Tell the model, ‘You’re a backend wizard fluent in Spark’ — boom, it spits code. Worked great on GPT-3.5. Now? Models are fattened up, tooling’s everywhere, and that role-play feels like training wheels on a Ferrari.
Adi nails it:
“Role assignment for a very, very long time, role assignment was one of the key pattern of how we work with the models. You’re an experienced backend software engineer specialized in Apache Spark, for example. And now the model assumes you need to focus more on these specific technologies… now that role assignment is slightly going away.”
Few-shot examples? Chain-of-thought? They’re dinosaurs too. Models self-prompt now, looping feedback without your hand-holding. It’s evolution, sure — but who’s banking on it? Not the solo dev grinding prompts till 3 AM.
Here’s the thing. Prompting’s stateless. Fire and forget. Context engineering? That’s building memory palaces for your AI. Load just the right data, silo long-term knowledge from session scratchpad, manage costs like a hawk. Suddenly, your agent’s not a goldfish; it’s got history.
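What does 'not a goldfish' look like in code? Here's a minimal sketch of the long-term-versus-scratchpad split; everything in it (the `AgentMemory` class, the keys, the facts) is illustrative, not from the podcast, and a real system would back the long-term side with a proper store.

```python
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    """Toy split: long-term knowledge survives sessions, the scratchpad doesn't."""
    long_term: dict[str, str] = field(default_factory=dict)  # persists across sessions
    session: list[str] = field(default_factory=list)         # wiped when the session ends

    def remember(self, key: str, fact: str) -> None:
        self.long_term[key] = fact

    def note(self, line: str) -> None:
        self.session.append(line)

    def build_context(self, topics: list[str]) -> str:
        # Load only the long-term facts relevant to this request,
        # plus the live scratchpad, never the whole knowledge base.
        facts = [self.long_term[t] for t in topics if t in self.long_term]
        return "\n".join(facts + self.session)

memory = AgentMemory()
memory.remember("deploy", "Deploys go through CI pipeline 'release'.")
memory.remember("lang", "Backend is Scala on Apache Spark.")
memory.note("User asked to fix the failing deploy.")
print(memory.build_context(["deploy"]))  # only deploy facts plus the scratchpad
```

The point of the selective `build_context` is the cost angle below: the Spark fact exists in memory but stays out of the window until a request actually needs it.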
But wait — is this just prompt engineering with extra steps?
Is Context Engineering Worth the Hype for Real Work?
Short answer: maybe, if you’re scaling.
Adi pushes domain smarts over tricks. Specify steps, constraints, outcomes — because generic prompts flop on complex tasks. Save winning workflows as ‘reusable skills.’ Scale teams without reinventing the wheel every sprint.
Think event-driven patterns. Agentic workflows that automate engineering grunt work, enrich data, orchestrate multi-step dances. Sounds dreamy. But strip the PR: this screams Kafka. Adi’s at Confluent, peddling stream processing for AI. Coincidence? In my book, no.
My unique hot take — and you won't find this in the transcript: this mirrors the 90s shift from flat-file hacks to relational databases. Back then, coders jury-rigged CSV sorcery for 'apps.' Oracle and pals sold state, schemas, ACID. Chaos to control. Today? Prompt cowboys to context czars. Who's cashing in? The infra giants like Confluent, stuffing event streams into every agent pipeline. History rhymes, profits don't lie.
And costs? Oh boy. Stuff the context window wrong, and you’re torching tokens like a kid with daddy’s credit card. Smart management — only load what’s needed — slashes bills 10x. But it demands engineers who grok your domain, not ChatGPT tourists.
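'Only load what's needed' sounds hand-wavy, so here's one concrete shape it takes: a greedy packer that spends a token budget on the highest-relevance snippets first. This is my sketch, not Adi's method; the word-count 'tokenizer' and the relevance scores are stand-ins for a real tokenizer and a real retriever.

```python
def pack_context(snippets: list[tuple[float, str]], budget_tokens: int) -> list[str]:
    """Greedy packing: take the highest-scoring snippets until the budget is spent.
    Token cost is approximated as word count here; real systems use a tokenizer."""
    chosen, used = [], 0
    for score, text in sorted(snippets, key=lambda s: s[0], reverse=True):
        cost = len(text.split())
        if used + cost <= budget_tokens:
            chosen.append(text)
            used += cost
    return chosen

snippets = [
    (0.9, "Service A deploys via the release pipeline."),
    (0.2, "Company picnic is in June."),
    (0.7, "Spark jobs read from the events topic."),
]
print(pack_context(snippets, budget_tokens=15))  # picnic trivia never makes the cut
```

Low-relevance filler gets dropped before it burns a single token, which is where the bill savings actually come from.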
Look, I’ve seen buzzwords bury better ideas. ‘Agentic systems’ feels like microservices fever dream 2.0 — promise autonomy, deliver distributed debugging hell.
Who Actually Makes Money on Context Engineering?
Follow the dough.
Not you, indie hacker, tweaking prompts in Notion. It’s the platform wranglers. Confluent’s betting big: stateful AI needs event buses. Enrich data on the fly? Kafka streams. Coordinate agents? Pub-sub magic.
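To make the pub-sub pitch concrete, here's a toy in-process event bus standing in for Kafka; in Confluent's world the topics would be real streams, and the two lambda 'agents' are my hypothetical enricher and consumer, not anything from the transcript.

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """In-process stand-in for a Kafka-style bus: topics, publishers, subscribers."""
    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._subs[topic]:
            handler(event)

bus = EventBus()
log: list[str] = []

# "Enricher" agent reacts to raw events and emits enriched ones.
bus.subscribe("raw", lambda e: bus.publish("enriched", {**e, "region": "eu"}))
# Downstream agent consumes the enriched stream.
bus.subscribe("enriched", lambda e: log.append(f"handled {e['id']} in {e['region']}"))

bus.publish("raw", {"id": "evt-1"})
print(log)
```

Note the agents never call each other directly; they only see topics. That decoupling is the whole sales pitch, and also the lock-in.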
Adi hints at it: ‘Agentic, stateful workflows built on event-driven patterns are becoming essential.’ Translation: buy our cloud.
Skeptical? Damn right. Prompt engineering was free(ish). Context? That’s pipelines, storage, orchestration. Vendors salivate. Remember when ‘big data’ meant Hadoop consultants got Lambos? Same playbook.
Yet, ignore it at peril. Stateless AI plateaus fast. Stateful wins at scale — think autonomous dev agents fixing deploys while you sleep. Or hallucinating less because grandma’s recipe book’s in long-term memory, not crammed per query.
Practically? Start small. Profile your context: what’s hot path (session), what’s cold (archive). Tools mature — vector DBs, RAG frameworks — but domain knowledge trumps all. No template saves a clueless prompt jockey.
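The cold-path lookup is where vector DBs earn their keep. As a back-of-napkin sketch, here's retrieval with plain word overlap standing in for embedding similarity; the archive contents are made up, and any real RAG setup would swap this scoring for an actual vector index.

```python
def retrieve(query: str, archive: list[str], top_k: int = 2) -> list[str]:
    """Stand-in for a vector-DB lookup: rank archived docs by word overlap
    with the query and pull only the top_k into the hot context."""
    q = set(query.lower().split())
    scored = sorted(
        archive,
        key=lambda doc: len(q & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

archive = [
    "Deploys run through the release pipeline and need a green CI build.",
    "The analytics jobs are written in Apache Spark.",
    "Office coffee machine maintenance schedule.",
]
print(retrieve("why did the deploy pipeline fail", archive, top_k=1))
```

Everything stays cold until a query pulls it in; the hot path carries only the session plus the top hits.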
Prediction: by 2026, 80% of enterprise AI flops because teams skip this. They’ll chase shiny models, ignore plumbing. Seen it before.
Why Does Context Engineering Matter for Developers Right Now?
Devs: your job’s mutating.
Prompts were copy-paste fun. Context? Systems thinking. Design memory flows, trigger events, build agent swarms. It’s SRE meets ML.
Adi warns: practices evolve fast. Constrained specs? Feed it your README, languages, SLAs. The model digests them and outputs compliant code. Grain of salt, though: models still glitch.
Real win: reusable skills. Nail a workflow? Package it. Team reuses, iterates. No more ‘works on my machine’ AI.
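'Package it' can be as boring as a frozen record in a shared registry. A minimal sketch, assuming a team-level Python module; the `Skill` shape, the registry, and the example workflow are all my invention, not a standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Skill:
    """A saved workflow: the spec that worked, frozen so the team can reuse it."""
    name: str
    system_spec: str        # constraints, outcome, rules the model must follow
    steps: tuple[str, ...]  # the ordered workflow that got the result

REGISTRY: dict[str, Skill] = {}

def register(skill: Skill) -> None:
    REGISTRY[skill.name] = skill

register(Skill(
    name="fix-failing-deploy",
    system_spec="You follow the team's CI rules; output a patch, not prose.",
    steps=("read CI logs", "locate failing step", "propose minimal patch"),
))

# Anyone on the team pulls the same vetted workflow instead of re-prompting.
skill = REGISTRY["fix-failing-deploy"]
print(skill.steps[0])
```

Freezing the dataclass is deliberate: a skill that mutates per-user is back to 'works on my machine.'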
Cynical upside? Less hallucination drudgery. More building.
Downside? Infra lock-in. Who’s your context kingpin? OpenAI? Anthropic? Or self-hosted rebellion?
Wrapping up the skepticism: context engineering's legit progress. Ditches gimmicks for architecture. But hype it as savior? Nah. It's plumbing. Vital, unsexy plumbing. Pay the toll, or stay stateless forever.
Frequently Asked Questions
What is context engineering?
It’s evolving prompts into stateful systems — managing memory, loading precise data, building agent workflows that remember across sessions.
How does context engineering differ from prompt engineering?
Prompts are one-shot, stateless tricks like roles or few-shots. Context adds persistence, events, domain-tuned memory for scalable AI.
Is context engineering replacing prompt engineering?
Not fully — prompts evolve inside it. But old hacks fade as models smarten; now it’s about architecture over artistry.