AI Agents Without Context Are Speed Traps

You've got 21 AI agents cranking out PRs. Impressive. But they're all taking orders from one person's head—and that person is about to break.

Why AI Agents Are Making Your Team Dumber, Not Smarter — theAIcatchup

Key Takeaways

  • AI agents are fast because they execute tasks from one person's brain—they don't generate the strategy that makes those tasks worthwhile
  • Business context (why a feature exists) and system context (how it's built) are different things and belong in different systems
  • Organizations are optimized to record what changed, not why—a legacy of Git culture, Agile interpretation, and metrics-driven management
  • The real bottleneck in AI agent systems is the human who validates each starting point, and agents make that person more critical, not less

I watched a senior engineer at a Series B startup explain their new AI agent system with genuine pride, and all I could think was: who’s going to remember why you built it this way when she leaves?

That question sits at the heart of a problem nobody’s talking about in the AI-for-developers space. We’re so obsessed with velocity—GitHub Issues trigger agents, agents write code, humans sleep while PRs merge—that we’ve completely ignored the structural weakness baked into the whole operation. And it’s not new. We’ve just made it faster.

The PR That Hides the Why

Say your team adopts an AI agent system. The agent reads a GitHub Issue, writes code, opens a PR, passes CI. Clean. Fast. Shipping feels effortless.

But here’s what actually happened: someone—one specific person—decided what the Issue should say. They decided what problem matters. They decided which solution to propose. The agent didn’t generate strategy. It generated tactics. And there’s a chasm between those two things.

When Speed Becomes a Liability

“If your pipeline can produce 600 PRs a month, the human who validates the starting point of each one is not keeping up.”

That’s the real equation nobody’s solving. You’ve optimized for velocity. You’ve ignored the bottleneck.

A startup recently shipped 21 AI agents in two months. The setup is mechanically beautiful: GitHub label triggers agent, agent writes code, human wakes up to a merged PR. It’s the future, sort of. Except the entire system depends on one person deciding which Issues get opened, and that person’s brain is the actual constraint. They’re the one carrying the product strategy, the architectural decisions, the abandoned experiments, the reason this feature exists at all. None of that lives in the repository. All of it lives in one person’s head.
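The label-to-PR flow described above can be sketched in a few lines. Everything here is a hypothetical illustration of the shape of such a pipeline (the `agent-ready` label, `run_agent`, and `handle_webhook` are invented names, not the startup's real setup). Notice where the judgment lives: the only gate is a human applying a label.

```python
# Minimal sketch of a label-triggered agent pipeline (all names hypothetical).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Issue:
    number: int
    title: str
    body: str
    labels: list = field(default_factory=list)

@dataclass
class PullRequest:
    issue_number: int
    branch: str
    description: str

def run_agent(issue: Issue) -> str:
    # Stand-in for the LLM call. The agent only sees the Issue text;
    # all strategy (why this Issue exists at all) was decided upstream by a human.
    return f"Auto-generated patch for: {issue.title}"

def handle_webhook(issue: Issue, trigger_label: str = "agent-ready") -> Optional[PullRequest]:
    """Open a PR only when a human has labeled the Issue for automation."""
    if trigger_label not in issue.labels:
        # The human gatekeeper hasn't validated this starting point yet.
        return None
    return PullRequest(
        issue_number=issue.number,
        branch=f"agent/issue-{issue.number}",
        description=run_agent(issue),
    )
```

The whole pipeline is mechanical except for one input: which Issues get the label. That decision carries all the context, and it never appears in the code.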

Roll that forward six months. That person gets promoted. Or burned out. Or leaves. The velocity machine doesn’t slow down gracefully—it crashes into a wall because nobody else knows why the system is shaped the way it is.

The Docs Lie You Tell Yourself

When someone points this out, the reflex is always the same: “Put it in docs/.” Everything that matters goes into a Markdown file. Problem solved.

Except it’s not solved. It’s hidden.

There are two different kinds of context, and almost nobody treats them as separate things. System context answers the “how” question—architecture diagrams, API contracts, data models. That stuff already lives in source code. Rewriting it in docs/architecture.md creates a second source of truth, and second sources of truth drift.

But business context answers the “why.” What customer problem does this solve? What was tried before and why did it fail? What constraints shaped the decision? What is the team optimizing for, and what was deliberately left out? Business context has a different lifecycle than code. Code changes constantly. The reason a feature exists might not change for three years. When you put business context inside a code repository, you’re tying it to the wrong clock. It gets treated like code—versioned when changed, ignored when it’s stable.

Worse: if you’ve got three repositories (mobile, backend, admin panel), business context gets split across all three. But the reason a feature exists isn’t a mobile concern or a backend concern. It’s a business concern. And a business is one thing, not three.

How We Got Here

This didn’t happen by accident.

Pull Request culture came from a specific place: Linus Torvalds needed to manage patches from thousands of distributed contributors. The unit of work was a diff. The record was a diff. The review was a diff. Technically elegant. Philosophically narrow. A diff shows what changed. It says nothing about why the change mattered, what alternatives existed, or what assumptions the change makes about the future.

Then Agile arrived and added a cultural permission: “Working software over comprehensive documentation.” Fair enough in context. But teams interpreted that as: “We don’t have to write down why we made decisions.” Different thing entirely.

From the other direction, MBA culture reinforced it. What can be measured can be managed. PRs can be counted, commits can be graphed, velocity can be charted in a weekly report. But you can't put "why we made this decision" into a dashboard. So it stopped being tracked.

Three forces converged: distributed version control, Agile philosophy, and management by metrics. And they all pointed toward the same outcome: organizations that excel at recording what changed and fail spectacularly at preserving why.

The Human Pillar Problem

In every team over a certain size, there’s at least one person who holds the whole picture.

They know why authentication is built that way. They know what the database schema looked like before the migration. They know which customers drove which decisions. Everyone leans on this person. They answer the same questions repeatedly. They review PRs not for code correctness—the tests handle that—but to enforce unwritten architectural assumptions that live only in their head.

Companies call this the “bus factor.” Misleading term. It implies the risk is a literal bus hitting the person. Slower and more common: the person gets tired. Promotion, burnout, slow disengagement. Or the codebase grows faster than one human can track, and the pillar cracks.

AI agents don’t solve this. They accelerate it into crisis.

Speed Without Reasoning Is Just Expensive Noise

There’s a meaningful difference between moving fast and reasoning well, and we’ve conflated them.

AI agents move fast when they’re handed a queue of well-formed tasks. Tasks that already contain the judgment, the context, the constraints. The 21-agent startup moves fast because one person is, essentially, doing the thinking and handing bite-sized assignments to machines that execute well. It feels like the agents are intelligent. They’re not. They’re fast. The intelligence is upstream.

What you actually want is AI that reasons about constraints you didn’t explicitly state. That questions assumptions. That says, “Wait, if we build it this way, what breaks when the business grows?” That requires context. Real context. Not API documentation. Not commit logs. The actual why.

What Actually Needs to Change

Start separating system context from business context. They belong in different systems.

System context lives in code. It’s version-controlled with code. It’s reviewed with code. It changes as code changes. Good.

Business context belongs in a separate, accessible system that’s designed around decision-making, not diffs. What problems are you solving? What alternatives did you reject? What constraints matter? What are you optimizing for? Archive that outside the repository. Make it the source of truth for “why.” When issues are opened, they reference that context. When PRs are reviewed, the context grounds them.
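One way to make that concrete: keep each decision in a small structured record outside the repository, and have every Issue open with a pointer back to it. This is a minimal sketch under assumed field names; `DecisionRecord` and `issue_template` are illustrative, not an existing tool or schema.

```python
# Sketch of a business-context record kept outside the code repository
# (hypothetical schema, not a real tool).
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    id: str               # e.g. "DEC-014"
    problem: str          # what customer problem this solves
    chosen: str           # the approach taken
    rejected: dict        # alternative -> why it was rejected
    constraints: list     # what shaped the decision
    optimizing_for: str   # what the team is trading other things away for

def issue_template(record: DecisionRecord, task: str) -> str:
    """Every Issue opens with the 'why', not just the 'what'."""
    return (
        f"Context: {record.id}: {record.problem}\n"
        f"Optimizing for: {record.optimizing_for}\n"
        f"Task: {task}\n"
    )
```

The record changes on the business's clock, not the code's, which is the point: a feature's rationale can sit unchanged for years while the code around it churns.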

Then AI can work from something real. Not from one person’s head. From actual reasoning about actual constraints.

The startup with 21 agents is impressive until the day it isn’t. Fast doesn’t matter when you’re moving in the wrong direction because nobody remembers why you started moving in the first place.

FAQ

Will AI agents replace senior engineers? No. They’ll replace junior engineers who follow instructions. The real bottleneck—the person who decides what the instructions should be—is still human. That person gets more valuable, not less. Until they leave.

How do I prevent my team from becoming dependent on one person? Stop pretending business context belongs in a code repository. Build a separate system—wiki, internal knowledge base, whatever—that’s designed around decisions, not diffs. Make it accessible and searchable. Tie it directly to your project management system. When an Issue is created, it should reference the business context that justifies it.

Can AI agents help preserve institutional knowledge? Not without changing how you store it. Agents work from whatever input they get. If your input is “GitHub Issue written by one person,” you get fast execution with fragile foundations. If your input includes access to documented decision-making, constraints, and alternatives, you get something closer to actual reasoning.



Written by Sarah Chen

AI research editor covering LLMs, benchmarks, and the race between frontier labs. Previously at MIT CSAIL.




Originally reported by Dev.to
