Fowler's AI Feedback Flywheel: We Built It First

Martin Fowler christens the Feedback Flywheel. We've been spinning ours for months, capturing AI's messy signals before they vanish. Skeptical? Read on.


Key Takeaways

  • Build machine-readable signals, not just docs – agents won't read memos.
  • Failure guardrails must run at runtime, or they're worthless.
  • Week 4 gets you a working flywheel; automation makes it compound.

Martin Fowler hits ‘publish’ on April 9, 2026. His manifesto on the AI Feedback Flywheel lands like a TED Talk for prompt engineers.

And here’s the kicker—we’d been cranking the same wheel for months. Without reading a word. Because when your team’s drowning in AI hallucinations, you don’t wait for a guru to label it.

Look, credit where due. Fowler nails the problem: every chatbot chit-chat spits out gold—or garbage—and most squads bin it. Poof. Gone. His fix? Capture signals, forge artifacts, watch the flywheel spin faster.

Four signals. Context (your codebase’s dirty secrets). Instructions (prompts that don’t suck). Workflows (step-by-step wins). Failures (AI’s epic faceplants). They birth priming docs, commands, playbooks, guardrails. Simple. Elegant. Obvious?
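To make the mapping concrete, here’s a minimal sketch of a captured signal in code. The names, the dictionary, all of it is our own shorthand, not Fowler’s taxonomy verbatim:

from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical mapping from the four signal types to the artifacts they feed;
# the labels are ours, not Fowler's.
SIGNAL_TO_ARTIFACT = {
    "context": "priming_doc",
    "instruction": "command_template",
    "workflow": "playbook",
    "failure": "guardrail",
}

@dataclass
class Signal:
    kind: str       # "context", "instruction", "workflow", or "failure"
    content: str    # the raw prompt, output, or postmortem note
    captured_at: str

    def artifact(self) -> str:
        # Which shared artifact this signal should be folded into.
        return SIGNAL_TO_ARTIFACT[self.kind]

s = Signal("failure", "Agent invented a nonexistent API endpoint",
           datetime.now(timezone.utc).isoformat())
print(s.artifact())  # guardrail

Nothing fancy: tag the signal, decide which shared artifact it feeds, move on.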

We’d been running the same system for months. Not because we copied it. We hadn’t read the article yet. We built it because it was the obvious solution to a real problem: every AI interaction generates useful signal, and almost every team throws that signal away.

That’s from the originators—bmdpat.com crew—who claim primacy. Smells like a flex. But fine, they log everything in Obsidian vaults, feed it to autotron agents, bolt on AgentGuard. Machine-readable, not just hipster markdown.

Why Fowler’s Flywheel Feels Like Yesterday’s News

But rewind. This ain’t new. XP teams in the 2000s shared war stories in wikis—context files by another name. Retros turned bugs into guardrails. Hell, even waterfall drones had ‘lessons learned’ binders gathering dust.

Fowler repackages it with AI gloss. Signals! Artifacts! Cadence! It’s XP 2.0 for the Grok era. Smart? Sure. But calling it a ‘flywheel’—Amazon’s ghostwriter vibes—reeks of VC pitch deck. Where’s the frictionless spin when your vault bloats to gigabytes?

Our unique twist: this flywheel grinds to a halt without ruthless pruning. History screams it—remember bloated Confluence pages killing dev velocity? Predict this: by 2028, 80% of teams abandon theirs, drowned in stale prompts. Hype cycle spins, then stalls.

Short version? It’s housekeeping with hype.

Is the AI Feedback Flywheel Worth Your Week 1-4 Rollout?

They lay out a four-week bootstrap. Week 1: shared context doc. Dump glossary, workflows, no-gos. Week 2: failure log. Spot patterns, birth constraints. Week 3: budget guard. No rogue $500 agent runs. Week 4: retro prompts into commands, workflows to bots.
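Week 2 is where most teams get hand-wavy, so here’s roughly what a machine-readable failure entry could look like. The JSONL format and field names are our own sketch, not their spec:

import json
from datetime import datetime, timezone
from pathlib import Path

# Our convention, not theirs: one JSON object per line so agents can grep it later.
FAILURE_LOG = Path("Feedback/failures.jsonl")

def log_failure(prompt: str, output: str, what_broke: str, fix: str) -> None:
    # Append one machine-readable record per failure, timestamped.
    FAILURE_LOG.parent.mkdir(parents=True, exist_ok=True)
    record = {
        "when": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "output": output,
        "what_broke": what_broke,
        "fix": fix,
    }
    with FAILURE_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_failure("Summarize Q3 revenue", "Cited a report that does not exist",
            "hallucinated source", "require a document ID before summarizing")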

Punchy plan. But reality bites. Standups devolve to ‘AI did it again’ therapy. Retros? Glorified gripe fests. Their autotron—CMO, CFO agents—sounds slick, till one hallucinates market data. Then guardrails multiply like roaches.

AgentGuard’s the gem. Pip install, wrap your agent:

from agentguard import Guard

# Cap the run at $5 of spend; the guard hard-stops it instead of letting the bill climb.
guard = Guard(budget_limit=5.00)

@guard.protect
def run_agent():
    # Your actual agent logic goes here; the decorator enforces the budget around it.
    pass

Hard stops on token binges, time sinks. No more ‘oops, $47 on a bad prompt.’ Yet—irony—enforcing failures codifies yesterday’s bugs. Tomorrow’s? Fresh hell.

And the table? Cadence mapped crisp:

Cadence | Fowler’s | Ours
Daily | Standup review | Autotron writes shared mem
Weekly | Retro playbooks | SKILL.md updates

Neat. But quarterly ‘strategy refresh’? That’s where executives nod off.
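The ‘Autotron writes shared mem’ cell is the part worth automating. A rough sketch of a daily rollup job, with a vault layout we invented for illustration:

from datetime import date
from pathlib import Path

# Hypothetical vault layout: feedback notes in, one shared memory file out.
VAULT = Path("vault")

def daily_rollup() -> None:
    # Collect today's feedback notes and append a dated digest to the shared
    # memory file that agents read at startup. Run it from cron or a standup bot.
    today = date.today().isoformat()
    notes = sorted((VAULT / "Feedback").glob(f"{today}*.md"))
    if not notes:
        return
    digest = [f"## {today}"] + [f"- {note.stem}" for note in notes]
    with (VAULT / "shared_memory.md").open("a", encoding="utf-8") as f:
        f.write("\n".join(digest) + "\n")

daily_rollup()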

Why Does the AI Feedback Flywheel Matter for Real Teams?

Skeptics yawn: ‘Just log better.’ Wrong. Compounding hits different. Agent reads primed context, spits better code, logs wins, feeds next run. Exponential? In theory. In practice—marginal gains till a black swan prompt breaks it.

Take their spin: ‘Every cycle knows more.’ Nah. Entropy wins. Stale signals poison the well. Their machine-readable edge? Agents ingest the vault as runtime config; humans skim docs. The gap widens, and junior devs end up chasing ghosts in markdown.
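Here’s the compounding claim boiled down to a toy loop, with a stand-in call_model and made-up file paths, so you can see where the ‘feeds next run’ part actually lives:

from pathlib import Path

CONTEXT = Path("Context/priming.md")    # what the team has learned so far
LOG = Path("Feedback/session_log.md")   # where this run's output gets captured

def call_model(prompt: str) -> str:
    # Stand-in for whatever LLM call you actually make.
    return f"(model output for: {prompt[:40]}...)"

def run_with_flywheel(task: str) -> str:
    # 1. Prime with the accumulated context.
    primer = CONTEXT.read_text(encoding="utf-8") if CONTEXT.exists() else ""
    # 2. Run the task with that context in front of it.
    output = call_model(primer + "\n\nTask: " + task)
    # 3. Log the exchange so the next run starts from a higher baseline.
    LOG.parent.mkdir(parents=True, exist_ok=True)
    with LOG.open("a", encoding="utf-8") as f:
        f.write(f"### {task}\n{output}\n\n")
    return output

print(run_with_flywheel("Refactor the billing module"))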

Dry humor alert: it’s like herding LLMs with Post-its. Effective, till the boardroom demands ROI. Metric? ‘Declining “why did AI do that?”’ Chortle. That’s your North Star?

Yet, bold prediction: teams ignoring this lose to the flywheel faithful. Not because it’s magic. Because it’s discipline disguised as a pattern.

But call out the PR: ‘Before he named it.’ Cute origin story. Feels like ‘we discovered fire, then Prometheus showed up.’

Wander into the plumbing for a minute: the implementation lives in Obsidian, autotron, and AgentGuard. The vault’s Feedback/ dir captures prompt, output, verdict, fix. Each maps to a signal. Context/ files get slurped at startup. Failures → guards. The flywheel hums.
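For flavor, here’s what one of those Feedback/ entries might look like when a script writes it instead of a human. The headings and layout are our sketch of the idea, not their exact template:

from datetime import datetime, timezone
from pathlib import Path

# Our guess at the vault layout; adjust to however your Obsidian vault is organized.
FEEDBACK_DIR = Path("vault/Feedback")

def write_feedback_note(prompt: str, output: str, verdict: str, fix: str) -> Path:
    # One markdown note per AI exchange: prompt, output, verdict, fix.
    # Agents can parse the headings; humans can read them in Obsidian.
    FEEDBACK_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d-%H%M%S")
    note = FEEDBACK_DIR / f"{stamp}.md"
    note.write_text(
        f"# Feedback {stamp}\n\n"
        f"## Prompt\n{prompt}\n\n"
        f"## Output\n{output}\n\n"
        f"## Verdict\n{verdict}\n\n"
        f"## Fix\n{fix}\n",
        encoding="utf-8",
    )
    return note

write_feedback_note("Generate the migration", "Dropped a column it shouldn't have",
                    "garbage", "always diff against the live schema first")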

Autotron agents chain: read, task, write, queue. Shared memory evolves. Baseline climbs.
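The read, task, write, queue chain, again as a toy sketch with invented role names rather than their actual autotron code:

from collections import deque
from pathlib import Path

SHARED_MEM = Path("vault/shared_memory.md")  # hypothetical shared memory file

def run_step(role: str, task: str) -> str:
    # Stand-in for one agent turn; swap in your real model call.
    return f"[{role}] handled: {task}"

def run_chain(first_task: str) -> None:
    # Each agent reads shared memory, does its task, writes what it learned,
    # and queues follow-up work for the next agent in line.
    queue = deque([("researcher", first_task)])
    while queue:
        role, task = queue.popleft()
        memory = SHARED_MEM.read_text(encoding="utf-8") if SHARED_MEM.exists() else ""
        result = run_step(role, memory + "\n" + task)
        SHARED_MEM.parent.mkdir(parents=True, exist_ok=True)
        with SHARED_MEM.open("a", encoding="utf-8") as f:
            f.write(result + "\n")
        if role == "researcher":
            queue.append(("writer", "Draft a summary of: " + result))

run_chain("Find last quarter's churn drivers")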

Failure’s the crux. Notes don’t block reruns. Guards do. Runtime iron fist.
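The difference in code, with made-up patterns; the point is that the check runs before the call, not after it:

import re

# Hypothetical patterns distilled from the failure log. A note warns; a guard blocks.
BLOCKED_PATTERNS = [
    r"summari[sz]e .* without (a |the )?source",
    r"drop .* production",
]

class GuardViolation(RuntimeError):
    pass

def check_prompt(prompt: str) -> None:
    # Runs before every agent call: a prompt matching a known failure pattern
    # is refused up front instead of being politely logged afterwards.
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, prompt, flags=re.IGNORECASE):
            raise GuardViolation(f"Prompt matches known failure pattern: {pattern}")

try:
    check_prompt("Summarize the Q3 numbers without a source")
except GuardViolation as err:
    print(err)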

The Hidden Rot in Your Flywheel

A one-paragraph warning: overfit to yesterday’s data and the AI drifts while the flywheel rusts. Quarterly purges or die.
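If you’d rather script the purge than argue about it in a meeting, a sketch; the 90-day cutoff and the layout are, once more, our own assumptions:

from datetime import datetime, timedelta, timezone
from pathlib import Path

FEEDBACK_DIR = Path("vault/Feedback")   # same hypothetical layout as above
MAX_AGE = timedelta(days=90)            # quarterly; tune to how fast your stack drifts

def purge_stale_notes() -> int:
    # Archive, don't delete, anything nobody has touched in a quarter,
    # so stale signals stop poisoning the primed context.
    now = datetime.now(timezone.utc)
    archive = FEEDBACK_DIR / "archive"
    archive.mkdir(parents=True, exist_ok=True)
    moved = 0
    for note in FEEDBACK_DIR.glob("*.md"):
        modified = datetime.fromtimestamp(note.stat().st_mtime, tz=timezone.utc)
        if now - modified > MAX_AGE:
            note.rename(archive / note.name)
            moved += 1
    return moved

print(f"Archived {purge_stale_notes()} stale notes")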

Teams chase shinier toys: o1-preview, Claude 3.5. Flywheel? Maintenance chore. That’s the trap.



Frequently Asked Questions

What is Fowler’s AI Feedback Flywheel?

A loop capturing AI session signals (context, prompts, workflows, failures) into shared artifacts like docs, templates, playbooks, guardrails. Spins team smarts over time.

How do you implement AI Feedback Flywheel in a week?

Week 1: shared context. Week 2: failure log. Week 3: guards. Week 4: commands, agents. Log everything, review cadence, enforce.

Does AI Feedback Flywheel actually improve teams?

Yes, if you prune ruthlessly—compounds knowledge. Skip it, watch juniors reinvent failures daily.

Written by Aisha Patel

Former ML engineer turned writer. Covers computer vision and robotics with a practitioner perspective.


Originally reported by Dev.to
