I watched a Cursor window flicker open on someone’s laptop, and they closed it within ten minutes.
That wasn’t laziness or technophobia. It was the sound of someone’s fifteen-year-old workflow colliding with an entirely different way of thinking. The suggestions came too fast. The mental model didn’t fit. The natural response? Shut it down, promise yourself you’ll come back when things slow down, and never touch it again.
But here’s what’s haunting: that engineer wasn’t alone. Not even close.
The Mandate Email Nobody Wants to Receive
When leadership decides AI is non-negotiable, they send an email. Sometimes it’s polite. Sometimes it’s packed with buzzwords. But the message is identical across companies: all teams will integrate AI tools by end of quarter. Adoption metrics to follow. No conversation. No pilot. No feedback loop. Just compliance.
I found myself reading through a thread with over 500 experienced engineers—senior engineers, staff engineers, tech leads, people who’ve been building systems for decades—describing what happened when their companies went this route. The frustration wasn’t about the tools themselves. It was about the arrival method.
The pattern was consistent: the same email, the same end-of-quarter deadline, the same adoption metrics. No conversation about which types of work actually benefit from AI and which don't. No pilot. No feedback loop. Just a mandate and a number to hit.
So what happens? For months, everything looks fine. Adoption metrics climb. Velocity holds. The quarterly review slide looks clean. Then, quietly, something starts to degrade. Decisions that used to come from judgment start coming from autocomplete. Engineers who were exceptional at the hard parts stop being asked to make the call. Engineers who never fully built a system from scratch can’t debug it when it breaks.
The failure curve is invisible until it isn’t. By the time it shows up in production, leadership’s already moved on.
Is Your Company Actually Using AI, or Just Complying With It?
Here’s the thing mandates get wrong: adults respond to mandates the same way kids do. They push back.
Sometimes loudly. Sometimes quietly. But the moment you corner someone, tell them “this is how you work now,” and measure whether they’re complying—you haven’t driven adoption. You’ve driven theater. Engineers will use the tool on the measurable tasks and quietly stop applying full judgment everywhere else.
And it’s worse than that. When you don’t make explicit distinctions about where AI helps and where it doesn’t, engineers stop making them too. The boilerplate work and the critical path start to blur. The tech lead who would have flagged the difference in review? The staff engineer who would have pushed back in planning? They stop being asked.
Compliance dashboards don’t capture where AI actually helps. They only capture whether someone opened the tool. That’s not a metric. That’s theater with numbers.
Where AI Actually Wins (And Where It Doesn’t)
Let’s be clear: AI is exceptional at certain things.
Boilerplate. Testing scaffolds. Exploring an unfamiliar API. Generating the skeleton for something you already know how to build but don’t want to type for three hours. That’s real use. That’s actual time returned to the work that requires your full brain.
But here’s where mandate culture breaks down: the moment you require usage without making these distinctions explicit, you lose the judgment that made the distinction in the first place.
Nuanced system design? Security-critical paths? Code that needs to survive five years of edge cases from customers you haven’t met yet? Those aren’t boilerplate. And when adoption becomes mandatory, the engineer who would have caught the difference—who would have said “not here”—stops being asked.
What Actually Works: The Two-Week Experiment
When I introduced AI to my team at Converse, I didn’t send a mandate email.
I gave them two weeks of blocked time to explore without delivery pressure. No meetings. No deliverables. Just room to try things without a deadline breathing down their necks. But before I gave my team anything, I had to give myself time with it first.
It took me about a month to come back to Cursor after that first session. My tech lead had been using it and kept pushing. He was already deep in it—building workflows, training agents, figuring out where it actually saved time and where it introduced noise. I eventually came back. Built my own workflows. Started understanding, from lived experience, what this tool actually did.
And here’s the difference: when I finally recommended it to the team, I wasn’t speaking from a mandate. I was speaking from exploration. From real patterns I’d discovered. From the judgment that comes from having sat with the resistance, pushed through it, and found what’s actually on the other side.
That’s not compliance. That’s adoption.
The Fear Isn’t About the Tool—It’s About Being Told
That initial resistance I felt? It wasn’t a character flaw. It wasn’t a performance issue. It was the same gut reaction I’d seen in junior engineers resisting new frameworks and in tech leads protecting workflows that had stopped scaling. I hadn’t expected to see it in myself.
But I recognized it because it’s universal. When something arrives as a mandate, it triggers a defensive response. You’re not being asked to evaluate it. You’re being told to use it. The difference is subtle and everything.
The organizations I saw struggling weren’t just measuring the wrong things. They weren’t measuring anything that actually mattered. They were measuring compliance, which a dashboard captures perfectly and which tells you nothing about value. They were optimizing for adoption metrics instead of asking: Where does this help? Where does it hurt? What judgment did we lose?
And by then, the damage was already spreading in the codebase.
What Changes Now
AI is a platform shift. A fundamental one. But platform shifts don’t work when they arrive as mandates. They work when they arrive as invitations—with time, judgment, and permission to say “not here.”
The mandate email will keep getting sent. Companies will keep measuring adoption metrics. Engineers will keep complying while quietly protecting the parts of their work that actually matter. And in six months, someone will wonder why the system that was built with AI scaffolding is mysteriously fragile, why the edge cases keep emerging, why the person who could have caught it wasn’t asked.
The organizations that win won’t be the ones that mandate fastest. They’ll be the ones that give their best engineers time to think, space to resist, and trust enough to listen when they do.
That’s not a metric you can put on a slide. But it’s the only thing that actually works.
Frequently Asked Questions
What does Cursor actually do? Cursor is an AI-powered code editor that generates code suggestions, helps with scaffolding and boilerplate, and assists with API exploration. It’s most effective for routine coding tasks, not for the nuanced system design or security-critical work that requires full human judgment.
Why do engineers resist AI tools at work? Resistance isn’t about technophobia—it’s about control. When tools arrive as mandates without exploration time, engineers respond defensively (just like anyone else would). They comply with metrics but stop fully exercising judgment on the work that actually matters.
How should companies introduce AI tools to teams? Instead of mandates: give engineers blocked exploration time without delivery pressure, let leadership experience it first before recommending it, make explicit distinctions about where AI helps versus where it doesn’t, and measure what actually matters (judgment preserved, code quality, edge cases caught) instead of just adoption rates.