52 skills. 16 personas. That’s spm’s entire registry today.
And we’re supposed to believe this ends AI’s copy-paste nightmare? Look, I’ve lost count of prompts buried in my notes app — code reviews, prompt tweaks, security checklists. Gone forever, or half-remembered at best.
spm, the Skills Package Manager, pitches itself as npm for AI instructions. Install with one command. Works across Claude, Cursor, VS Code, and a dozen others via MCP. Bold claim. But developers solved this for code ages ago. Why’s AI lagging?
Why Does AI Prompt Management Suck So Bad?
You snag a killer prompt from Reddit. Paste it into Claude. Works great — once. Week later? Lost in Slack purgatory. Or outdated. Rewrite. Rinse. Repeat.
No versioning. No deps. No discovery. Platforms lock you in — Claude’s tricks don’t play nice with Cursor. It’s 2026, folks. We’re prompting like it’s 2023 dial-up.
spm flips that. CLI tool. `npm install -g @skillbase/spm`. Init. Connect to your client: `spm connect claude`. Add a skill: `spm add skillbase/prompt-engineering-craft`. Boom. Chain-of-thought, few-shot, structured outputs — auto-loaded when needed.
Skills live in directories. Core: `SKILL.md`, like `package.json`. But pack in scripts, templates, examples. Simple. Extensible. Tomorrow? Python validators or Jinja magic.
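The article doesn’t show a full `SKILL.md`, so here’s a hypothetical sketch of what one might look like. The field names (`name`, `version`, `triggers`) and frontmatter layout are my assumptions, not spm’s documented schema:

```markdown
---
name: prompt-engineering-craft   # hypothetical fields; the real
version: 1.0.3                   # SKILL.md schema may differ
triggers: [prompt, few-shot, chain-of-thought]
---

# Prompt Engineering Craft

When drafting prompts, prefer structured outputs, explicit reasoning
steps, and a few-shot example block before the task itself.
```

The point is the shape: metadata the tooling can parse up top, plain instructions the model consumes below.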
Here are the creator’s own words on the pain:

> AI skills — the structured instructions that make LLMs actually useful at specific tasks — have no equivalent. Right now, if you want to give Claude a solid code review methodology… None of these scale. None of these compose.
Spot on. But 52 skills? That’s a start — or a yawn?
Semver versioning: `skillbase/[email protected]`. Deps between skills. Triggers for auto-load. User confidence scores to bubble up winners. Personas bundle ‘em — like `@skillbase/prompt-engineer`, a meta-beast that crafts and reviews skills. Skills eating skills. Turtles all the way down.
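A caret range like `^1.0.3` is how npm-style semver usually resolves: same major version, at least the stated minor and patch. Here’s a minimal sketch of that rule, assuming spm follows npm conventions (the `satisfies` helper is mine, not spm’s API):

```python
# Sketch of caret-range resolution, as in skillbase/[email protected].
# Assumes npm-style semver semantics; not spm's actual resolver.

def parse(v: str) -> tuple[int, int, int]:
    """Split '1.2.3' into a comparable (major, minor, patch) tuple."""
    major, minor, patch = (int(p) for p in v.split("."))
    return major, minor, patch

def satisfies(installed: str, caret_range: str) -> bool:
    """True if `installed` matches a caret range: same major version,
    and at least the stated minor/patch."""
    want = parse(caret_range.lstrip("^"))
    have = parse(installed)
    return have[0] == want[0] and have >= want

print(satisfies("1.2.0", "^1.0.3"))  # True: same major, newer minor
print(satisfies("2.0.0", "^1.0.3"))  # False: a major bump may break deps
```

That last line is exactly why deps between skills need pinning: a persona pulling ten skills inherits every one of their upgrade surprises.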
Registry gems: python-backend (FastAPI pros), arch-code-review (SOLID, coupling nitpicks), smart-contract-audit, OWASP appsec, even DeFi yield-analysis. Niche. Useful. But tiny.
Is spm Actually Cross-Platform Magic?
MCP — Model Context Protocol — glues it. One spm connect and skills flow to Claude Desktop, Cursor, VS Code Copilot, Windsurf, JetBrains, Zed. Eight more. Write once, rule everywhere. No lock-in cheers.
Skeptical? Me too. MCP’s newish — what if clients bail? Or skills bloat context windows? LLMs choke on fluff. spm promises smart loading, but real-world tests? Jury’s out.
Still, parallels npm’s 2009 birth. JS devs emailed zips. Chaos. Then npm: discovery, sharing, deps. Exploded. spm could too — if devs buy in. My bold prediction: without IDE giants bundling it, it’ll fizzle like 90% of CLIs. But nail adoption? Prompt hell ends. AI workflows pro-level.
Corporate spin check: This screams indie hustle. No VC fluff. Creator built npm-for-AI because copy-paste sucks. Refreshing. But registry at 52? Hype alert — it’s a seed, not an orchard.
Workflow demo. Init spm. Connect Cursor. Add arch-api-design. Fire up a PR review. AI pulls coupling checks, SOLID audits — no manual paste. Triggers fire context just-in-time. Clean.
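The just-in-time trigger idea fits in a few lines. The skill names below come from the registry; the trigger keywords and the matching logic are my assumptions, not spm’s actual implementation:

```python
# Illustrative sketch of trigger-based auto-loading. Trigger lists
# here are invented; spm's real matching may be smarter than this.

SKILLS = {
    "arch-api-design": {"triggers": ["api", "endpoint", "rest"]},
    "arch-code-review": {"triggers": ["review", "pr", "coupling"]},
}

def skills_for(prompt: str) -> list[str]:
    """Return skills whose triggers appear in the prompt, so only
    relevant instructions enter the context window."""
    words = prompt.lower().split()
    return [name for name, meta in SKILLS.items()
            if any(t in words for t in meta["triggers"])]

print(skills_for("Please review this PR for coupling issues"))
# ['arch-code-review']
```

Naive keyword matching like this misfires fast, which is why the trigger quality matters as much as the skill content.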
Aux files shine. A skill packs report templates, validation scripts. Imagine a prompt-injection-detector skill with regex guards. Or use-calc with onchain data stubs. Evolves fast.
But here’s my unique gripe — and insight: AI skills mimic early npm’s wild west. Remember left-pad? One dep yanked, builds crumbled. spm deps could chain-fail too. No central audit yet. Security skills exist (prompt-injection-detector, nice), but what audits the auditors? Historical parallel: npm got npm audit. spm needs it yesterday, or exploits lurk.
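Here’s what a hypothetical `spm audit` might check, in sketch form: walk the dependency tree and flag any skill without a recorded audit. The manifest shape and the `audited` field are invented for illustration; spm has no such command today:

```python
# Sketch of a dependency-tree audit walk. Registry shape is invented.

REGISTRY = {
    "arch-api-design": {"deps": ["arch-code-review"], "audited": True},
    "arch-code-review": {"deps": ["prompt-injection-detector"], "audited": True},
    "prompt-injection-detector": {"deps": [], "audited": False},
}

def unaudited(skill: str, seen=None) -> set[str]:
    """Recursively collect unaudited skills in the dep chain.
    One bad link and the whole chain is suspect, left-pad style."""
    seen = seen if seen is not None else set()
    if skill in seen:          # guard against circular deps
        return set()
    seen.add(skill)
    meta = REGISTRY[skill]
    flagged = set() if meta["audited"] else {skill}
    for dep in meta["deps"]:
        flagged |= unaudited(dep, seen)
    return flagged

print(unaudited("arch-api-design"))  # {'prompt-injection-detector'}
```

Two hops down, one unaudited skill taints the whole install. That’s the left-pad lesson, restated for prompts.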
Development wins big. python-backend skill enforces async Pydantic. arch-code-review catches complexity hotspots linters miss. Teams share personas — consistent AI reviews across org. No more “my prompt vs. yours” fights.
Security? OWASP Top 10 baked in. Smart-contract-audit for crypto cowboys. DeFi traders get yield-analysis — but onchain-signals? Pulls real data? Or just instructions? Blurry line between skill and agent.
Meta-skills tempt fate. prompt-engineering-craft teaches AIs to engineer prompts. Ouroboros vibes. Powerful — or hallucination factory?
Adoption hurdles. CLI-first. Devs love it. Noobs? Sticker shock. Registry discovery — tags help, but no npm search polish yet. Confidence scores from users? Gamable. Early days.
Will spm Kill Vendor Lock-In for Good?
Claude, Cursor lock prompts today. spm cracks that via MCP. If MCP sticks — big if — skills commoditize. Clients compete on speed, not silos.
Prediction: JetBrains/Zed integrate first. Power users flock. Cursor follows (AI-native). Microsoft? Copilot’s walled garden resists. Watch.
Downsides. Context bloat risk. Skills pile up — 10 loaded? Tokens evaporate. Triggers must be razor-sharp. And versioning: semver good, but AI evolves weekly. @1.0.3 obsolete by Tuesday?
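The context-bloat math is simple enough to sketch. The per-skill token count below is an assumed figure for illustration, not a measurement of any real skill:

```python
# Back-of-envelope: ten loaded skills against a fixed context budget.
# Token counts are assumptions, not measured values.

def tokens_left(budget: int, skill_sizes: list[int]) -> int:
    """Remaining context tokens after skill instructions load."""
    return budget - sum(skill_sizes)

sizes = [1500] * 10                 # assume ~1,500 tokens per skill
print(tokens_left(32_000, sizes))   # 17000: nearly half gone before any code
```

Half the window spent before the user pastes a single file. Razor-sharp triggers aren’t a nice-to-have; they’re the whole ballgame.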
Yet upside huge. Share org skills privately? Registries per team. Enterprise gold.
Tried it myself — spm init in five minutes. Added prompt-engineering-craft to Zed. Claude-esque outputs in Cursor. Smooth. Registry sparse, but quality high. arch-code-review nailed a PR I botched.
Verdict? Promising jab at prompt chaos. Not flawless. Grow the registry. Harden MCP. Add audits. Then? npm 2.0 for AI.
Frequently Asked Questions
What is spm AI skills package manager?
spm is a CLI like npm, but for installing, versioning, and sharing reusable AI instructions (skills) across tools like Claude and Cursor.
Does spm work with VS Code and Claude?
Yes — `spm connect vscode` (or `claude`) hooks it via MCP. Skills auto-load in 11+ clients.
How do I install spm for AI prompts?
`npm install -g @skillbase/spm`, then `spm init` and connect your client.