Test AI Agent Frontends Without API Calls

AI agent UIs are flaky nightmares in tests. One dev's trick: hijack production streams as fixtures. No more real API hits, just pure, replayable determinism.

Replay Hell: Testing AI Agent Frontends with Production Ghost Streams — theAIcatchup

Key Takeaways

  • Turn prod JSONL logs into deterministic test fixtures for AI agent streams—no API calls needed.
  • Replays catch real-world timing bugs mocks miss, like out-of-order parallel tools.
  • This is the VCR pattern reborn for AI; expect it to standardize fast.

Staring at a red CI badge at 2 a.m., coffee gone cold, as the AI agent test flakes again.

Testing AI agent frontends sucks. Straight up. You’ve got your mocks for model calls—fine, whatever. But the streaming layer? That event soup pouring into your UI? Nobody tests it worth a damn. Teams skip it, or they hammer the live API in integration tests, burning cash and begging for timeouts. Flaky. Expensive. Dumb.

Here’s the thing. Those JSONL recordings you’re already spitting out in production? They’re test gold. Disguised as logs, sure, but really? Regression suites on steroids. Capture once, replay forever. No invented mocks. Real timings, real sequences, real chaos—as it happened.

“A .jsonl recording is just a test fixture in disguise. Once you see it that way, your production streams become a regression test suite you’re building automatically, whether you meant to or not.”

That quote nails it. From the dev who built AgentStreamRecorder. Smart cookie.

Why Your AI Agent Tests Are Broken (And Nobody Admits It)

Short answer: state machines hate non-determinism.

You care if isStreaming flips off after ‘done’. If activeTools clears on tool_result. If that 60% progress tick nudges the bar. Parallel tools resolving out-of-order? Crash city, unless your logic’s bulletproof. Server drops mid-stream? Does the UI limp on, or flop?
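Those transitions fit in a tiny reducer. A sketch, assuming event shapes like the ones AgentStreamRecorder logs (the guard on tool_result is exactly the fix that out-of-order arrivals force):

```python
# Toy reducer for the UI state above (isStreaming, activeTools).
# Event names mirror the article; the logic is illustrative, not any
# framework's actual reducer.
def reduce_state(state, event):
    kind = event["event"]
    data = event.get("data", {})
    if kind == "token":
        state["isStreaming"] = True
    elif kind == "tool_use":
        state["activeTools"].append(data["tool_use_id"])
    elif kind == "tool_result":
        # Guard against out-of-order arrivals: a result can land for a
        # tool whose tool_use event hasn't shown up yet.
        if data["tool_use_id"] in state["activeTools"]:
            state["activeTools"].remove(data["tool_use_id"])
    elif kind == "done":
        state["isStreaming"] = False
    return state

state = {"isStreaming": False, "activeTools": []}
events = [
    {"event": "token", "data": {"text": "hi"}},
    {"event": "tool_use", "data": {"tool_use_id": "tu_1"}},
    {"event": "tool_result", "data": {"tool_use_id": "tu_1"}},
    {"event": "done"},
]
for ev in events:
    state = reduce_state(state, ev)
```

Drop the membership guard and a fast parallel tool's result arriving early throws or corrupts the array. That's the whole bug class.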

Mock fetch()? Brittle as hell—misses real event orders. Live API? Slow, pricey, random. Skip it? Your prod UI rots.

Most teams pick door three. Ship half-baked frontends, pray users don’t notice.

Pathetic.

And yet.

How JSONL Replays Fix This Mess

AgentStreamRecorder logs every prod stream. Sessions like this:

{"session": "f3a2c1b0-…", "started_at": "2026-04-01T02:14:00+00:00", "t": 0}

{"t": 0.0, "event": "token", "data": {"text": "Here is what I found"}}

{"t": 0.052, "event": "tool_use", "data": {"tool_name": "web_search", "tool_use_id": "tu_1", "status": "running"}}

Boom. Timestamps down to milliseconds. Replay at warp speed—10,000x—for tests that fly.

Load with load_sessions(). Spin up an async generator. Patch your agent endpoint. Fire httpx.AsyncClient. Assert events match.
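In miniature, the loop looks like this. A sketch only: `load_sessions` and the endpoint-patching step come from the article, so this inlines a tiny equivalent rather than the tool's real API:

```python
import asyncio
import json

# Hypothetical fixture mirroring the sample session above.
FIXTURE = "\n".join([
    '{"session": "f3a2c1b0", "started_at": "2026-04-01T02:14:00+00:00", "t": 0}',
    '{"t": 0.0, "event": "token", "data": {"text": "Here is what I found"}}',
    '{"t": 0.052, "event": "tool_use", "data": {"tool_name": "web_search", "tool_use_id": "tu_1", "status": "running"}}',
])

def load_session(text):
    """First line is the session header; the rest are timed events."""
    records = [json.loads(line) for line in text.splitlines() if line.strip()]
    return records[0], records[1:]

async def replay(events, speed=10_000.0):
    """Yield events, compressing the recorded gaps by `speed`."""
    prev = 0.0
    for ev in events:
        await asyncio.sleep(max(0.0, (ev["t"] - prev) / speed))
        prev = ev["t"]
        yield ev

async def main():
    header, events = load_session(FIXTURE)
    seen = [ev["event"] async for ev in replay(events)]
    assert seen == ["token", "tool_use"]
    return header["session"], seen

session_id, seen = asyncio.run(main())
```

The real version streams these through your patched endpoint with httpx; the shape of the assertion is the same.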

We fixed a nasty bug this way. Parallel tools: fast one finishes first, tool_result lands before slowpoke’s tool_use. activeTools array? Mangled. User screams. We grab the exact prod JSONL. Replay. Boom—reproduced in 0.1ms. Patch. Green CI.

No “works on my machine” mysticism. No API roulette.

Is This the End of Flaky AI Frontend Tests?

Look, it’s not magic. But it’s damn close.

Fixtures now? Path to a JSONL. tool_stream.jsonl. multi_turn.jsonl. error_stream.jsonl. Pytest loves ‘em.

async def replay_as_stream(path: Path, speed: float = 10_000.0):

Sessions load. Gaps slept (or faked fast). Yield SSE lines exact as prod.
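Filled in, that signature might look like this. A sketch under the format shown earlier (one JSON object per line, a header line without an event key, events carrying a `t` offset in seconds); the recorder's actual helper may differ:

```python
import asyncio
import json
from pathlib import Path

async def replay_as_stream(path: Path, speed: float = 10_000.0):
    """Replay one recorded session as SSE-formatted lines.

    Assumes the JSONL shape shown above; this is an illustrative
    reimplementation, not AgentStreamRecorder's own code.
    """
    prev = 0.0
    for line in path.read_text().splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        if "event" not in record:  # session header, not a stream event
            continue
        # Sleep the recorded gap, compressed 10,000x by default
        await asyncio.sleep(max(0.0, (record["t"] - prev) / speed))
        prev = record["t"]
        # Emit exactly what the prod server would put on the wire
        yield f"data: {json.dumps(record)}\n\n"
```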

Patch app.state.agent = mock_agent. Stream to /chat. Collect lines. Assert equality.

Pure. Repeatable. Covers timing edge cases mocks ignore.

My hot take, and the part most write-ups miss: this is Rails VCR for the AI era. Back in 2010, VCR cassette’d HTTP responses for deterministic tests and saved web dev asses. Now? AI streams are the new HTTP. JSONL cassettes will standardize. Mark my words: six months, every agent framework bundles a recorder. Or dies.

(History rhymes, folks. Ignore at peril.)

But wait, elephant in the room: prod logging bloat and privacy. Scrub PII first. Anonymize session IDs and timestamps. It’s not a free lunch. Do the work.
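A minimal scrub pass might look like this, assuming the JSONL shape from earlier (the field names match the sample; real pipelines need domain-specific PII rules on top):

```python
import hashlib
import json
import re

# Crude email matcher for illustration only; real redaction needs more.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+(\.[\w-]+)+")

def scrub_line(line: str) -> str:
    """Hash the session ID, redact obvious emails from token text."""
    record = json.loads(line)
    if "session" in record:
        # Stable anonymous ID: the same session always hashes the same way
        record["session"] = hashlib.sha256(
            record["session"].encode()
        ).hexdigest()[:12]
    data = record.get("data", {})
    if isinstance(data.get("text"), str):
        data["text"] = EMAIL.sub("[email]", data["text"])
    return json.dumps(record)
```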

Still. Game-changer.

Why Does This Matter for AI Devs Right Now?

AI agents aren’t toys. They’re eating chat apps, copilots, everything. Frontends must hum—snappy streams, no jank. One dropped tool_result? User rage-quits.

Current state: most teams E2E test against the live API. Waste. Anthropic/OpenAI bills stack. CI minutes evaporate.

This? Zero cost replays. Prod data trains your tests automatically. Scales with users—more chaos captured, better coverage.

Skeptical? Try it. Fork AgentStreamRecorder (open source, duh). Log a session. Replay locally. Watch tests go sub-1ms.

Dry humor alert: It’s like giving your CI a time machine. Back to prod incidents, no flux capacitor needed.

Downsides? Recordings age. Model tweaks break sequences. Refresh fixtures quarterly. Not hard.

Unique angle: BigCos spin this as ‘observability.’ Nah. It’s test infra. Don’t let Vercel/Anthropic repackage your logs as $10k/month SaaS. Own it.

Building Your Own Replay Rig

Start simple. Pipe prod streams to S3. CLI pulls ‘em.

Fixtures dir: real prod slices. Label ‘em—crash_at_token_42.jsonl.

Test every hook. Progress bars. Error banners. Multi-tool limbo.

Parallel? Crank speed to 1.0, stress timing bugs.

We caught out-of-order hell this way. Priceless.

Pro tip: Git LFS those JSONL files. Don’t bloat your repo.

The Bug Hunt That Sold Me

User reports: “Tools vanish mid-run.” Prod-only gremlin.

Grab session ID from logs. Slice JSONL. Replay.
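The slicing step fits in a few lines, assuming the multi-session log keeps the format above (each session opens with a header line carrying "session", and event lines belong to the most recent header):

```python
import json

def slice_session(lines, session_id):
    """Extract one session's lines from a multi-session JSONL log."""
    out, keep = [], False
    for line in lines:
        if not line.strip():
            continue
        record = json.loads(line)
        if "session" in record:
            # Header line: start or stop keeping based on the ID
            keep = record["session"] == session_id
        if keep:
            out.append(line)
    return out

log = [
    '{"session": "s_aaa", "t": 0}',
    '{"t": 0.0, "event": "token", "data": {"text": "hi"}}',
    '{"session": "s_bbb", "t": 0}',
    '{"t": 0.0, "event": "token", "data": {"text": "yo"}}',
]
chunk = slice_session(log, "s_bbb")
```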

Patch. Test. Ship.

Hours, not days. No finger-pointing.

That’s the win.



Frequently Asked Questions

How do I test AI agent frontends without calling the API?

Capture prod streams as JSONL with AgentStreamRecorder. Replay via async generators in tests. Patch your agent—done.

What is AgentStreamRecorder?

Open-source tool logging AI agent event streams to JSONL. Timings, events, all preserved for replays.

Can JSONL replays handle parallel tools and errors?

Yes—real prod sequences include out-of-order arrivals, drops, everything. Perfect for edge cases.

Wrapping the Replay Revolution

AI testing’s fixed. Mostly.

Grab it. Mock less. Replay more.

Your frontend thanks you.

Written by Marcus Rivera

Tech journalist covering AI business and enterprise adoption. 10 years in B2B media.



Originally reported by Dev.to
