Picture this: the AI world buzzing with RAG hacks, everyone shoveling documents into chatbots like coal into a steam engine, hoping for gold on every query. Rediscovering the wheel, every single time. That’s what we’d accepted: endless retrieval, no retention. Then Andrej Karpathy drops his ‘LLM Wiki’ gist, and bam. A persistent, LLM-maintained markdown wiki. Obsidian vault meets Claude Code. The agent does the grunt work. Suddenly you think bigger.
And here’s the game-changer: it’s not just a pattern. It’s the Memex reborn—Vannevar Bush’s 1945 dream of associative trails through knowledge, finally solved because LLMs don’t flake on maintenance.
I tried Karpathy’s local setup. Months in. It sings… until it doesn’t.
Why Did Local LLM Wikis Feel Like a Trap?
RAG? It’s rediscovering fire with every match. Dump docs, query, repeat—synthesis tax paid anew, every damn time. A wiki? Compounds. Links pre-built. Contradictions flagged upfront. Knowledge sticks.
But humans suck at upkeep. Ten pages to tweak from one new source. Spotting that fresh article nuking your old notes? Eyes glaze, wiki dies. LLMs? Tireless janitors.
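That janitorial work is mostly bookkeeping, and it's easy to see why an agent handles it better than a human. A minimal sketch of one chore, assuming Obsidian-style `[[wiki-links]]` in markdown: build a backlink index so that when one page changes, every page that cites it gets queued for review instead of silently going stale. (The page names and helper functions here are illustrative, not from Karpathy's gist.)

```python
import re

# Matches [[Target]] and [[Target|display alias]] wiki-links.
WIKILINK = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]+)?\]\]")

def backlink_index(pages: dict[str, str]) -> dict[str, set[str]]:
    """Map each page title to the set of pages that link to it."""
    index: dict[str, set[str]] = {title: set() for title in pages}
    for title, body in pages.items():
        for target in WIKILINK.findall(body):
            index.setdefault(target.strip(), set()).add(title)
    return index

def pages_to_review(changed: str, pages: dict[str, str]) -> set[str]:
    """When `changed` is edited, every page linking to it may now be stale."""
    return backlink_index(pages).get(changed, set())

# A toy three-page wiki (hypothetical content).
wiki = {
    "RAG": "Retrieval-augmented generation. Contrast with [[LLM Wiki]].",
    "LLM Wiki": "Agent-maintained notes. See [[Memex]] for the 1945 ancestor.",
    "Memex": "Vannevar Bush's associative-trail machine.",
}

print(pages_to_review("Memex", wiki))  # → {'LLM Wiki'}
```

A human forgets to run this pass; an LLM agent runs it on every edit, then actually reads the flagged pages for contradictions.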
Karpathy nailed it. Yet local? Three frictions piled high.
One machine rules. Your phone at the in-laws’ sparks an idea, and you can’t add it. Stuck.
One LLM island. Claude edits, ChatGPT blind, phone app oblivious. Funnel everything through the terminal god.
Sharing? Git repo handover, sure. But a living, queryable brain? Nope. Dead on arrival.
Friction kills habits. Three layers? Funeral.
“I ran the local version for months. Obsidian vault, Claude Code in a terminal, a CLAUDE.md schema, a log file, the whole thing. It works. It also has three problems that compound.”
That’s the raw truth from the original post. Spot on.
But wait—my bold prediction: shared wikis like this birth ‘team brains.’ Not personal vaults. Companies ditching Notion for LLM-curated hives, where your Claude adds, my Cursor queries, contradictions auto-flagged. Collaboration? Telepathic.
What Makes Hjarni the LLM Wiki Upgrade Everyone Needs?
Hjarni: Karpathy’s dream, hosted. MCP-exposed. Any LLM client reads and writes, smoothly.
Capture a thought in the Claude phone app. Refine it in Claude Code mid-hack. Query it from Cursor next week. Same notes, same tags, same links. Zero sync hell.
No ‘open terminal ritual.’ Just talk to your LLM du jour—it pipes to Hjarni.
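Concretely, MCP clients talk JSON-RPC 2.0 and invoke server tools via the spec's `tools/call` method. A hedged sketch of what a client might send a hosted wiki; the tool name `add_note` and its arguments are hypothetical, since Hjarni's actual tool surface isn't documented in the post:

```python
import json

# Shape of an MCP `tools/call` request (JSON-RPC 2.0), as any client
# (Claude app, Cursor, ...) would emit it. `add_note` and its argument
# names are assumptions, not Hjarni's confirmed API.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "add_note",
        "arguments": {
            "title": "RAG vs wiki",
            "body": "Persistent synthesis beats per-query retrieval.",
            "tags": ["llm", "memory"],
        },
    },
}

print(json.dumps(request, indent=2))
```

The point: the wire format is client-agnostic, which is exactly why any LLM app can be the front end.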
Pro seats? Humans plus LLMs share one brain. Magic.
Tradeoffs? Honest:
- No git history. (Branchers, stick local.)
- No Obsidian graphs. (Miss those force-directed beauties? Fair.)
- A database, not folders. Grep fans, oof.
- No plugin party. A focused laser over marketplace sprawl.
If those sting—Karpathy local wins. Terminal dwellers, Obsidian obsessives, git purists: stay put.
But everywhere access? Phone, ChatGPT, Cursor—Hjarni.
Look, RAG’s like querying a haystack eternally. Wiki’s a garden—LLM prunes, you harvest.
And that Bush nod in Karpathy’s gist? Chills. Memex faltered on human drudgery. AI cracks it. We’re building external brains, folks. Platform shift. Knowledge isn’t hoarded—it’s alive, associative, shared.
Why Does Karpathy’s LLM Wiki Matter More Than You Think?
Forget hype. This skewers corporate spin: ‘Just use our vector DB!’ No. Persistent synthesis trumps raw retrieval. Compounding beats combustion.
Local works for hermits. Hosted? Unlocks ubiquity.
Imagine: dev team with one wiki. Bug hits, Cursor pulls context, Claude flags fix, phone notes the war story. No Slack sludge.
Or solo: morning coffee ChatGPT riff, afternoon Cursor code dive—context carries.
It’s wonder-fuel. AI as brain extension, not query toy.
Energy here? Electric. Pace? This shifts paradigms.
Downsides real, but fixable. Git export coming? Who knows. Point: choose your friction.
The move’s clear. Ditch doc-dumps. Build brains.
Whether markdown grind or Hjarni hum—same future.
Frequently Asked Questions
What is Karpathy’s LLM Wiki?
A pattern where an LLM agent maintains a persistent markdown wiki of knowledge, instead of RAG re-deriving everything per query. In Karpathy’s setup, Claude Code does the heavy lifting over an Obsidian vault.
How does Hjarni improve on local LLM Wiki?
Hosts it cloud-side via MCP, so any LLM (Claude, ChatGPT, Cursor) accesses from anywhere—no machine limits, no sync, shared brains for teams.
Is Hjarni better than Obsidian for AI knowledge management?
If you want cross-LLM, everywhere access—yes. Trade git/filesystem/plugins for zero-friction ubiquity. Obsidian shines for local purists.