Some Claude users report wasting up to 30% of their tokens on context bleed, and that's real cash down the drain.
And here’s the wild part. It’s not just inefficiency. It’s like inviting your therapist to a job interview, or worse, your boss to date night. Your AI sidekick starts whispering the wrong secrets.
Look, Claude’s a beast: a fundamental platform shift, AI as the new OS for thought. But load in one repo of markdown notes? Boom. Pollution.
The original sin: a single CLAUDE.md slurps everything. Writing tips mix with salary notes. Essay drafts contaminated by cover letters. Noisy. Invasive. Kills flow.
Why Claude Context Pollution Feels Like Brain Fog
Imagine your brain as a bustling library. Global knowledge — philosophy, writing quirks — sits on the front desk, always open. But job apps? Those hide in the back stacks, dusted off only for résumés.
Claude doesn’t get that. One repo, one mega-context. Suddenly, technical deep-dives dredge up “target salary: $250k.” Weird vibes. Productivity nosedive.
I hit this wall hard. Repo bloated with life docs. Sessions derailed. But AI’s promise? Precision thinking at lightspeed. Pollution mocks that.
The fix? Surgical context. One repo. Tagged files. On-demand assembly.
How I Built the ctx Script to Slay the Beast
Each markdown file gets YAML frontmatter:

```yaml
---
context: job-hunting
title: Active Applications
---
```
Globals, like writing style and thinking patterns, are always in. Everything else? Opt-in by tag.
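One way to mark an always-on file. Note: the reserved `global` tag below is my assumption about how ctx could distinguish globals from tagged files, not confirmed syntax:

```yaml
---
context: global   # hypothetical reserved tag: loaded in every session
title: Writing Style
---
```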
It's a TypeScript script, ctx. Run ctx job-hunting. It grabs globals plus tagged files and spits out .claude-context.md. Token count? Printed live.
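The core loop fits in a few dozen lines. This is a minimal sketch reconstructed from the description above, not the published @nbaglivo/ctx source; the frontmatter parsing, the `global` tag, and the chars/4 token estimate are all assumptions:

```typescript
// Sketch of a ctx-style context assembler (illustrative reconstruction).
// Each doc is markdown with YAML frontmatter; "global" docs always load,
// others only when their context tag is requested.

type Doc = { context: string; title: string; body: string };

// Parse a tiny frontmatter subset: key: value pairs between --- fences.
function parseDoc(raw: string): Doc {
  const match = raw.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  if (!match) return { context: "global", title: "", body: raw };
  const meta: Record<string, string> = {};
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx > -1) meta[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  return {
    context: meta["context"] ?? "global",
    title: meta["title"] ?? "",
    body: match[2].trim(),
  };
}

// Keep globals plus any docs matching the requested tag, merge into one prompt.
function assemble(raws: string[], tag: string): string {
  return raws
    .map(parseDoc)
    .filter((d) => d.context === "global" || d.context === tag)
    .map((d) => `## ${d.title}\n\n${d.body}`)
    .join("\n\n");
}

// Crude token estimate: roughly 4 characters per token.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

const docs = [
  "---\ncontext: global\ntitle: Writing Style\n---\nShort sentences.",
  "---\ncontext: job-hunting\ntitle: Active Applications\n---\nTarget roles: staff eng.",
  "---\ncontext: essays\ntitle: Drafts\n---\nEssay ideas.",
];

const out = assemble(docs, "job-hunting");
console.log(out);                            // globals + job-hunting only
console.log(`~${estimateTokens(out)} tokens`);
```

The real script would read the repo's markdown files from disk and write `.claude-context.md`; the in-memory strings here just keep the sketch self-contained.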
Then:

```shell
claude --system-prompt-file .claude-context.md
```
Alias chains it. Friction? Zero.
“The generated file is gitignored. It’s ephemeral by design - the source markdown files are the truth, the generated file is just today’s lens into them.”
That’s the gold. Ephemeral output. Source eternal. Git clean.
But the magic loop — ask Claude post-session: “What context gap hurt us? Rewrite my files.” Boom. Self-improving system. Compounds like interest.
Tokens sting on every message: a system prompt file ships in full each time, with none of CLAUDE.md's selective loading. My setups? Under 2k tokens. Yours? Watch it; the script warns you.
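The warning is trivial to bolt on. A hypothetical version, assuming the ~2k budget mentioned above and the rough chars/4 token heuristic:

```typescript
// Hypothetical budget check: warn when the assembled context exceeds a token budget.
const TOKEN_BUDGET = 2000; // the ~2k ceiling mentioned above

function checkBudget(text: string, budget: number = TOKEN_BUDGET): string {
  const tokens = Math.ceil(text.length / 4); // rough chars-per-token heuristic
  return tokens > budget
    ? `WARNING: ~${tokens} tokens (budget ${budget})`
    : `OK: ~${tokens} tokens`;
}

console.log(checkBudget("x".repeat(10_000))); // ~2500 estimated tokens, over budget
console.log(checkBudget("short context"));    // well under budget
```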
Is This the Future of AI Prompting?
Here’s my bold prediction: this isn’t a hack. It’s the blueprint for AI agents in 2025.
Think early Unix pipes — grep this, awk that. Context as streams. Tag, filter, pipe to model. No more blob prompts.
Anthropic’s PR spins Claude as ‘helpful everywhere.’ Cute. But real work? Modes matter. Code mode. Brainstorm mode. Job mode. Pollution ignores that.
This tool enforces it. Mirrors human cognition — context-switching pro. AI evolves to match.
Historical parallel? CSS in the ’90s. Global styles bled everywhere. Then scoped selectors. Web exploded. Same here: scoped context unlocks AI’s next leap.
Manual? Viable. Copy-paste sections. But friction kills. CLI? Makes precision default.
```shell
npx @nbaglivo/ctx --tag essay-thinking --output .claude-context.md
```
Scans. Merges. Counts. Done.
Why package it? Laziness wins. Skip the step once and you revert to sludge-context. Tooling tips the scales.
Why Does Token Waste from Context Pollution Matter?
Costs add up. Ten sessions a day with 5k wasted tokens each? Easily $10/month. Scale that across a team? Oof.
Speed hit — marginal now, killer at scale.
Deeper: trust. Leaky context? Feels spied-on. Your AI knows too much, wrong things.
Structure clarifies thinking. Tags declare scope. Job info? Not everywhere. Hygiene win.
Repo evolves. No blob-of-me. Modes reflected.
80 lines of TypeScript. A naming convention. Most useful systems? That simple.
But here’s the futurist fire: AI as platform. Context mgmt = app dev. This script? Your first library.
Tomorrow: VSCode extension. Auto-tag via NLP. Agent swarms sharing contexts.
Pollution’s the old web — tables for layout. This? Flexbox for minds.
Try it. ctx job-hunting. Feel the clarity rush.
The Feedback Flywheel in Action
Session ends. “Claude, what did we miss? Patch the contexts.”
It does. Next run? Richer.
Compounds. Week one: basics. Month? Oracle-level personalization.
No manual toil. AI farms its own soil.
Risk? Over-reliance. But that’s AI life — lean in.
What If You Skip the Tool?
Manual merging? Tedious. You skip it once, then re-explain everything every chat. Waste.
Tool flips it. Right-context easy. Wrong? Hard.
Discipline baked in.
Frequently Asked Questions
What is Claude context pollution?
It’s when irrelevant notes from your repo leak into AI sessions, bloating tokens and derailing focus — like salary notes in code chats.
How do you fix Claude context pollution?
Use tagged markdown files and a script like ctx to generate session-specific system prompts, loading only globals + chosen tags.
Does ctx work with other AI tools?
Core idea, yes: any model that takes a system prompt. But it's tuned for Claude's --system-prompt-file flag.