qrec: Claude Code Session Recall Tool

You've sunk weeks into Claude Code decisions, all trapped in JSONL hell. qrec pulls them out, locally, no cloud nonsense. But does it stick the landing?

1,000 Claude Sessions Forgotten: The Local Tool That Forces AI to Remember — theAIcatchup

Key Takeaways

  • qrec indexes Claude Code sessions locally with hybrid search, enabling cheap context recall.
  • Unplanned handoff feature slashes context usage to 15% mid-session.
  • Critique: Anthropic's fresh-start design forces DIY memory fixes like this.

1,000+ Claude Code sessions. That’s what it took to snap.

Every fresh chat? A blank slate. Weeks of tool calls, thinking blocks, buried in ~/.claude/projects/. JSONL files nobody reads.

Enter qrec. A local session recall tool the dev built to strip the noise. It indexes with BM25 and semantic search, fused via RRF. Claude agents fetch past decisions on demand. Zero tokens burned. All on your machine.
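RRF is simple enough to sketch. Each retriever contributes 1/(k + rank) per document, so a result ranked decently by both BM25 and embeddings beats one that only a single list loves. A minimal illustration (the constant and doc IDs are made up, not qrec's actual code):

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: merge ranked lists without score calibration.

    rankings: ranked lists of doc IDs (e.g. one from BM25, one from
    embeddings). Each doc accumulates 1 / (k + rank) across lists.
    """
    scores = {}
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25 = ["redis-config", "auth-refactor", "ci-fix"]
semantic = ["redis-config", "db-migration", "auth-refactor"]
fused = rrf_fuse([bm25, semantic])
# "redis-config" ranks first: it scores well in both lists.
```

No tuning of raw BM25 scores against cosine similarities needed, which is the usual argument for RRF over weighted score sums.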

Here’s the thing. Claude Code’s amnesia isn’t a bug. It’s a feature, if you’re Anthropic selling API calls. But for mortals grinding code? Infuriating.

Why Does Claude Code Make You Start from Scratch Every Time?

Every Claude Code session starts fresh. That’s fine at first, but 1,000+ sessions in, I had weeks of decisions technically on disk and practically unreachable.

Raw truth. Sessions pile up like digital hoarder junk. Readable? Sure. Practical? Laughable.

The dev tried QMD first. Claude-mem next. Both fizzled — too opaque, too cloud-happy. qrec bets on transparency. Local-only. No phoning home to servers that forget anyway.

And get this: the killer app wasn’t planned. Context handoff. Nearing the 200-turn limit? Spin up a new session, type “pick up context from the previous session.” Boom. Exact decision retrieved. Context drops to 15%. Magic? Nah. Just smart indexing.

But let’s not kid ourselves. Anthropic could’ve baked memory in. Instead, they leave it to tinkerers. Classic Big AI move — hype the model, outsource the plumbing.

Short paragraphs suck sometimes.

This one’s longer. qrec parses those JSONL beasts, yanks out tool calls and reasoning chains. Hybrid search means keyword hits plus embeddings for the fuzzy stuff. RRF ranks ‘em without score bias. Fire a query like “that Redis config from last week,” and it spits out relevant chunks. Feed them to Claude. Smooth — or close enough.

Eval’s unsolved, admits the post. Fair. Measuring “recall quality” in code history? Tricky. No benchmarks yet. But real-world wins: that handoff. Saved hours.

Is qrec the Fix for AI Coding Amnesia?

Maybe. It’s open source, I assume — post screams indie vibes. Runs local, so privacy win. No vendor lock. But scale it to teams? Shared sessions? Crickets.

Here’s my hot take, absent from the original: this echoes Git’s birth. Early coders emailed patches, lost history in email hell. Git indexed commits locally, retrieved on demand. qrec? Git for AI convos. Predict this: as sessions balloon to millions, every AI code tool needs it. Anthropic ignores at peril.

Dry humor time. Claude “remembers” like my goldfish — up to the bowl’s edge. qrec’s the bigger tank.

Critique the hype. Post gushes handoff. Fine. But 15% context? Still fragile. One bad retrieval, and you’re debugging AI hallucinations. Local’s great — until your drive fries.

Look.

Bets paid off. Transparency over black boxes. Local trumps cloud for devs who trust hardware more than ToS.

What’d they skip? UI polish, maybe. CLI-first feels right for this crowd. But onboard normies? Add a web dash.

Unsolved eval nags. Throw LLMs at it? Rate retrieval relevance? Costs tokens — ironic.

Still, qrec nails the itch. Claude Code without it? Half-baked IDE.

Why Does This Matter for AI Developers?

Tokens cost money. Context limits kill flow. qrec slashes both.

Parallel: remember SVN’s central repo nightmares? Local git repos freed us. qrec localizes AI memory. Bold prediction — forks incoming: team-sync via git, vector stores tuned for code.

Anthropic’s PR spin? “Sessions are ephemeral for freshness.” Bull. It’s laziness. Charge per recall, they’d fix it yesterday.

Dev wager: hybrid search crushes pure semantic. BM25 grounds the embeddings. Smart.

Punchy aside — JSONL parsing? Tedious goldmine.

Real talk. I’ve ditched chatty AIs for this reason. qrec revives Claude. Worth the install.

But test it. Edge cases lurk: massive sessions, multilingual code, evolving projects.

The post teases full details. Tried stuff, bets, eval woes. Dive there if you’re building.

Me? Skeptical optimist. qrec works now. Tomorrow? Depends on Claude’s whims.



Frequently Asked Questions

What is qrec for Claude Code?

Local tool to index and recall past Claude Code sessions. Searches JSONL history with hybrid engine. No cloud, no costs.

Does qrec work with other AI coding tools?

Built for Claude’s format. Port it? Possible, but embeddings need tweaking.

How do I install qrec?

Check the repo — post links it. Local Python setup, index your ~/.claude/projects/. Query away.

Written by Marcus Rivera

Tech journalist covering AI business and enterprise adoption. 10 years in B2B media.



Originally reported by Dev.to
