
AI Agents Out of Control: IT Managers Sound Alarm

Forget the utopian visions of AI assistants streamlining your life. Right now, most IT managers are drowning in a digital tsunami of rogue AI agents. Turns out, making them is easy; controlling them? Not so much.


Key Takeaways

  • A significant majority (77%) of IT managers report that AI agents are out of control.
  • Proliferation of AI agents mirrors early, unmanaged cloud adoption issues.
  • Many organizations lack basic oversight capabilities, including 'undo' functions for agent actions.

And just like that, they’re everywhere.

One minute, your team’s just trying to whip up a quick internal chatbot to answer some FAQs; the next, you’ve got hundreds of AI agents, each with a slightly different flavor of access, permissions leaking like a sieve, and nobody has a damn clue what any of them are actually doing. Sound familiar? If you’re in IT, it should. A new survey from Rubrik Zero Labs is basically shouting from the rooftops that 77% of IT managers feel their AI agents are teetering on the edge of chaos, and frankly, I’m not surprised. I’ve seen this movie before, and the ending usually involves a significant pain in the backside – and a fat bill.

The ‘Agent Sprawl’ Phenomenon: Déjà Vu All Over Again?

This whole “agent sprawl” situation? It’s got all the hallmarks of the early days of cloud adoption. Remember that? Teams, eager to be agile, just started spinning up cloud instances willy-nilly, using whatever vendor seemed easiest at the time. The result? A fragmented mess of disconnected services, inconsistent governance, and security holes big enough to drive a truck through. Apparently, AI agents are on the same trajectory. Kriti Faujdar from Microsoft is quoted saying pretty much exactly that, and honestly, it’s hard to argue.

“We are already seeing patterns similar to early cloud adoption, where teams spin up agents independently using different frameworks and vendors. This leads to fragmentation, inconsistent governance, and hidden security gaps.”

This isn’t just about a few rogue scripts; it’s a systemic issue. We’re talking about the very real possibility that the supposed productivity gains from these AI agents are being utterly consumed by the manual effort needed just to keep them from breaking things. Eighty-one percent of IT managers are already spending more time babysitting these things than the agents are supposed to save. And security? Forget about it. Users are disabling VPNs and bypassing security controls just to get their agents up and running. It’s like handing out candy to toddlers in a room full of fragile Ming vases.

Who’s Actually Making Money Here?

This is the question that keeps me up at night. The vendors selling the platforms? Obviously. They’re minting it, selling the dream of effortless AI. But for the companies deploying these things? The math isn’t adding up yet. They’re spending on development, on monitoring, on the inevitable clean-up operation when something inevitably goes sideways. And what are they getting? A bunch of unmanageable AI programs that are actively undermining their own security posture. The irony is almost poetic, if it weren’t so damn expensive.

The ‘Undo’ Button Blues

Here’s another gem: a whopping 86% of IT managers expect AI agent proliferation to outpace their security measures within the next year. More than half think it’ll happen in six months. And the kicker? Nearly everyone admits they lack the basic ability to roll back an agent’s actions. So, an agent messes up – maybe it deletes a critical file, maybe it accidentally exposes sensitive data – and what then? Tough luck? You’ve basically built a digital bomb with no off switch.
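
What would an off switch even look like? As a back-of-the-napkin sketch, and purely hypothetical (none of this comes from the Rubrik report), one pattern is to journal every agent action alongside its inverse, so an operator can actually unwind what an agent did:

```python
from dataclasses import dataclass, field
from pathlib import Path
from typing import Callable
import shutil


@dataclass
class JournaledAction:
    """One agent action paired with the operation that reverses it."""
    description: str
    apply: Callable[[], None]
    undo: Callable[[], None]


@dataclass
class ActionJournal:
    """Executes agent actions and keeps an undo stack for rollback."""
    history: list[JournaledAction] = field(default_factory=list)

    def run(self, action: JournaledAction) -> None:
        action.apply()
        self.history.append(action)

    def rollback(self, steps: int = 1) -> None:
        """Reverse the most recent actions, newest first."""
        for _ in range(min(steps, len(self.history))):
            self.history.pop().undo()


def quarantine_delete(journal: ActionJournal, path: Path, trash: Path) -> None:
    """The agent 'deletes' a file by moving it to quarantine, so the
    delete stays reversible instead of being a one-way door."""
    target = trash / path.name
    journal.run(JournaledAction(
        description=f"delete {path}",
        apply=lambda: shutil.move(str(path), str(target)),
        undo=lambda: shutil.move(str(target), str(path)),
    ))
```

The quarantine-delete trick is the whole point: if the agent can’t truly destroy anything, ‘undo’ stays on the table.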

This is where the conversation needs to shift from ‘how fast can we deploy AI?’ to ‘how do we build AI responsibly?’ Nik Kale, a principal engineer with the Coalition for Secure AI, nails it: “Any team with API access can spin up an agent in an afternoon. Multiply that across a large enterprise, and you get hundreds of agents with overlapping permissions, no consistent identity model, and no one who can tell you the full inventory.” This isn’t a minor glitch; it’s a foundational problem.

Is This the Future of Work, or Just a Future Headache?

Looking at the five post-deployment questions IT managers need to answer, according to the Rubrik report, it’s clear we’re not there yet:

  • What did it do?
  • Why did it do it?
  • What did it touch?
  • Did it succeed safely?
  • Where did it fail?

These are basic auditing questions that, right now, are largely going unanswered. Without this visibility, how can anyone define acceptable behavior, audit access, implement human oversight, or, you know, prevent disaster?
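
Those five questions map almost one-to-one onto the fields of a structured audit record. Here’s a minimal sketch of the idea; the field names, file format, and agent ID are all invented for illustration, not taken from Rubrik:

```python
from dataclasses import dataclass, asdict
import json
import time


@dataclass
class AgentAuditRecord:
    """One log entry per agent action, one field per question."""
    agent_id: str
    action: str                    # What did it do?
    reasoning: str                 # Why did it do it?
    resources_touched: list[str]   # What did it touch?
    succeeded_safely: bool         # Did it succeed safely?
    failure_detail: str | None     # Where did it fail?
    timestamp: float = 0.0


def log_action(record: AgentAuditRecord) -> None:
    """Append-only JSON lines: cheap to write, easy to audit later."""
    record.timestamp = time.time()
    with open("agent_audit.jsonl", "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")


log_action(AgentAuditRecord(
    agent_id="faq-bot-07",
    action="updated knowledge-base article KB-1042",
    reasoning="user question matched a stale answer",
    resources_touched=["kb/articles/KB-1042"],
    succeeded_safely=True,
    failure_detail=None,
))
```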

Kriti Faujdar again: “Organizations want to move fast, but without clear guardrails, they risk creating systems that are difficult to trust, audit, or scale. The winners will be those who treat agent management not as an afterthought, but as a first-class discipline.” That’s the crux of it. This isn’t just another software update; it’s a new paradigm of autonomous systems that requires a whole new level of discipline, a level most organizations are demonstrably not at.

And don’t even get me started on model drift. Renze Jongman, founder of Liberty91, points out that the very foundation models these agents are built on change over time. The agent you certified last quarter might be behaving completely differently now. Your governance model needs to be a rubber band, not a rigid stick.
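
To make the rubber-band point concrete: one lightweight pattern is to freeze a small evaluation suite when an agent is certified, then re-run it on a schedule and flag the agent for re-review when its pass rate slips. A rough sketch, where `run_agent`, the eval cases, and both thresholds are stand-ins rather than any vendor’s API:

```python
from typing import Callable

BASELINE_PASS_RATE = 0.95  # recorded when the agent was certified
DRIFT_TOLERANCE = 0.05     # how much slippage we accept before re-review


def pass_rate(
    run_agent: Callable[[str], str],
    eval_cases: list[tuple[str, str]],  # (prompt, expected substring)
) -> float:
    """Fraction of the frozen eval suite the agent still passes."""
    passed = sum(
        1 for prompt, expected in eval_cases
        if expected in run_agent(prompt)
    )
    return passed / len(eval_cases)


def has_drifted(
    run_agent: Callable[[str], str],
    eval_cases: list[tuple[str, str]],
) -> bool:
    """True when behavior has slipped below the certified baseline."""
    rate = pass_rate(run_agent, eval_cases)
    if rate < BASELINE_PASS_RATE - DRIFT_TOLERANCE:
        print(f"Drift: pass rate {rate:.2f} vs baseline {BASELINE_PASS_RATE}")
        return True
    return False
```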

FAQs

What does AI agent sprawl mean? AI agent sprawl refers to the uncontrolled proliferation of AI agents within an organization, leading to fragmentation, inconsistent governance, and security vulnerabilities.

Will this replace my job? While AI agents can automate tasks, they also create new roles for managing, auditing, and securing them. The immediate concern is that poorly managed agents are more likely to cause problems than provide significant benefits.

How can I control my AI agents? Effective control involves establishing clear governance policies, maintaining an inventory of all agents, implementing strong auditing and telemetry, defining acceptable behavior, and ensuring proper security guardrails are in place, including mechanisms for rollback.
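
For illustration only, the inventory piece can be as simple as a central registry every agent must appear in before it runs. All names below are hypothetical:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentRecord:
    """Minimum metadata worth keeping per agent for basic governance."""
    agent_id: str
    owner: str                   # a human accountable for this agent
    framework: str               # which vendor/SDK it was built on
    permissions: frozenset[str]  # exactly what it may touch
    last_reviewed: str           # ISO date of the last access review


class AgentInventory:
    """Central registry: nothing runs unless it is registered here."""

    def __init__(self) -> None:
        self._agents: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        self._agents[record.agent_id] = record

    def overpermissioned(self, limit: int = 5) -> list[AgentRecord]:
        """Flag agents whose permission sets exceed a review threshold."""
        return [a for a in self._agents.values()
                if len(a.permissions) > limit]
```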


Written by Aisha Patel

Former ML engineer. Covers computer vision, robotics, and multimodal systems from a practitioner perspective.

🧬 Related Insights

- **Read more:** [Claude Code Leak: 500K Lines Spill Anthropic's Agent Secrets](https://theaicatchup.com/article/ainews-the-claude-code-source-leak/)
- **Read more:** [Anthropic's Secret Claude Mythos AI Digs Up Thousands of Unpatched Software Flaws](https://theaicatchup.com/article/anthropic-says-its-latest-ai-model-can-expose-weaknesses-in-software-security/)

Originally reported by ZDNet - AI
