Skrun: Turn AI Skills into APIs Fast

You've got a folder full of SKILL.md files from Claude or Copilot. Useless relics — until Skrun. This open-source runtime turns them into POST endpoints devs can actually use.

Skrun Unlocks Your Forgotten AI Skills as Production APIs—No Frameworks Required — theAIcatchup

Key Takeaways

  • Skrun turns trapped SKILL.md files into callable APIs with zero framework learning.
  • Multi-provider fallbacks and tool integration make agents production-resilient.
  • Like Docker for AI: unlocks sharing, hints at coming agent marketplaces.

Imagine this: you’re a dev with a drawer of half-baked AI skills, those SKILL.md files from tinkering with Claude Code or Copilot. They’re smart, sure — they review code, lint JS, query databases — but trapped. No webhooks. No product integration. Just digital hoarding.

Skrun changes that. Overnight.

For real people — solo devs, indie hackers, engineering teams — it means resurrecting experiments as live APIs. Point at your skill folder, run skrun init --from-skill ./my-skill, tweak agent.yaml, deploy. Boom: POST /api/agents/dev/my-skill/run. Curl it. Wire it into Zapier. Suddenly, your AI side project powers a SaaS.
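If you'd rather hit that endpoint from code than curl, here is a minimal Python client sketch. The URL shape (`/api/agents/dev/my-skill/run`), the `{"input": ...}` payload, and the Bearer auth come from the post; the `dev-token` value and a running local registry on port 4000 are assumptions.

```python
# Minimal client sketch for a Skrun-deployed skill endpoint.
# URL shape and payload format follow the post; the token value
# and a running local registry are assumptions for illustration.
import json
import urllib.request

def build_run_url(namespace: str, skill: str,
                  host: str = "http://localhost:4000") -> str:
    """Build the POST /run URL for a deployed skill."""
    return f"{host}/api/agents/{namespace}/{skill}/run"

def build_request(url: str, payload: dict,
                  token: str = "dev-token") -> urllib.request.Request:
    """Assemble the authenticated JSON request the endpoint expects."""
    return urllib.request.Request(
        url,
        data=json.dumps({"input": payload}).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    req = build_request(
        build_run_url("dev", "my-skill"),
        {"code": "function add(a,b) { return a + b; }"},
    )
    # Uncomment once the local registry is running:
    # with urllib.request.urlopen(req) as resp:
    #     print(json.load(resp))
    print(req.full_url)
```

Wire the same request into Zapier, a cron job, or a webhook handler and the "side project powers a SaaS" claim stops being hype.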

Here’s the thing. We’ve been here before. Remember pre-Docker days? Apps glued to specific servers, skills siloed in tools like Anthropic’s playground. Skrun? It’s the Docker for AI agents — my bold call, absent from the original post. Containerize skills once, run anywhere: local, soon cloud VMs. No lock-in to one LLM playground.

Why Were Your AI Skills Gathering Dust?

Skills emerged from agentic AI hype — Claude’s Artifacts, Copilot’s custom agents, OpenAI’s GPTs. Great for one-offs. Lousy for production.

Each platform hoards them. Anthropic won't expose your Claude skill as a JSON API. Copilot? Stuck in VS Code. You hack wrappers, rewrite in LangChain (ugh, another framework), or abandon ship.

Skrun reads SKILL.md natively. Parses description, tools, inputs. Spits out agent.yaml with your config. No migration pain.

"take a SKILL.md → get a POST /run endpoint. No new framework to learn. No infrastructure to set up. Just point at a skill, configure the model, and deploy."

That’s the creator’s words. Spot-on. But dig deeper: it’s architecturally clever. A runtime layer abstracts the mess — model providers, tool execution, state persistence — into a thin API gateway.

How Skrun Actually Works (The Guts)

Boot it local: npm install -g @skrun-dev/cli, clone repo, pnpm dev:registry. Set GOOGLE_API_KEY in .env.

Init from the skill, then edit agent.yaml.

Pick Gemini 2.5 Flash as primary, GPT-4o as fallback. Smart — LLMs flake; redundancy rules.
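The fallback behavior sketched above boils down to a try-in-order loop. The provider names come from the post; the call interface and error type here are invented for illustration, not Skrun's actual internals.

```python
# Illustrative primary-plus-fallback loop. Provider names are from
# the post; the call signature and error type are hypothetical.
from typing import Callable

class ProviderError(Exception):
    """Raised when a model provider fails (rate limit, outage, ...)."""

def run_with_fallbacks(
    providers: list[tuple[str, Callable[[str], str]]], prompt: str
) -> tuple[str, str]:
    """Try each (name, call) pair in order; return (provider, output)."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except ProviderError as exc:
            errors.append(f"{name}: {exc}")
    raise ProviderError("all providers failed: " + "; ".join(errors))

def flaky_gemini(prompt: str) -> str:
    # Stand-in for a primary that happens to be down.
    raise ProviderError("503")

def stub_gpt4o(prompt: str) -> str:
    # Stand-in for a fallback that answers.
    return f"reviewed: {prompt}"
```

One outage on the primary and the request quietly lands on the fallback; the caller never sees a 5xx.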

Tools? Two flavors. Bundle scripts/ dir: shell, Node, Python. Declare in YAML, LLM invokes, Skrun execs.

Or MCP servers from the Model Context Protocol ecosystem on npm. npx @playwright/mcp --headless for browser control. The agent browses and persists KV state across calls. Run code review twice? It remembers prior issues.
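That "remembers prior issues" behavior is plain key-value persistence across runs. The store interface below is hypothetical (Skrun's actual KV API isn't shown in the post); it just demonstrates the mechanic.

```python
# Sketch of per-agent KV persistence: a second run can see issues
# recorded by the first. The store interface is hypothetical, not
# Skrun's real API; the "catch" heuristic is a toy check.
class KVStore:
    def __init__(self) -> None:
        self._data: dict[str, list[str]] = {}

    def get(self, key: str, default=None):
        return self._data.get(key, default)

    def set(self, key: str, value) -> None:
        self._data[key] = value

def review(code: str, store: KVStore) -> dict:
    """Toy code review that remembers issues from earlier calls."""
    prior = store.get("issues", [])
    found = []
    if "catch" not in code:          # crude error-handling check
        found.append("no error handling")
    store.set("issues", prior + found)
    return {"new_issues": found, "prior_issues": prior}
```

Swap the dict for Skrun's real store and the second review call starts from what the first one learned.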

Deploy: skrun deploy. A server spins up on localhost:4000. Auth via Bearer token. JSON in: a code snippet. JSON out: score, issues, review.

curl -X POST http://localhost:4000/api/agents/dev/code-review/run \
-H "Authorization: Bearer dev-token" \
-H "Content-Type: application/json" \
-d '{"input": {"code": "function add(a,b) { return a + b; }"}}'

Output? A score of 60, ESLint nags, an error-handling critique. Production-ready.
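The response fields (score, issues, review) come from the post's example run; a tiny validator sketch, everything beyond the field names being illustrative:

```python
# Validate the response shape from the post's example run.
# Field names are from the post; the range check is an assumption.
def parse_review(resp: dict) -> tuple[int, list[str], str]:
    """Pull (score, issues, review) out of a run response."""
    score = int(resp["score"])
    if not 0 <= score <= 100:
        raise ValueError(f"score out of range: {score}")
    return score, list(resp["issues"]), str(resp["review"])
```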

Four packages: schema, cli, runtime, api. 154 tests. MIT-licensed. v0.1 is local-only; cloud, streaming, and a hub are incoming.

But — and this is my critique — agent.yaml’s I/O contract? Solid start, but rigid. Inputs as flat JSON? Fine for code review. Clunky for graphs, files. Creator seeks feedback; expect evolution.

Is Skrun Ready to Ship Your Agents?

Short answer: yes, for prototypes to mid-scale. Local registry scales to teams via dev tokens.

RuntimeAdapter interface screams extensibility — sandbox VMs for cloud (Fly.io? Render?). Caller API keys next — no shared .env.
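The post names a RuntimeAdapter interface but doesn't document its methods, so this ABC sketch is a guess at what the local-vs-cloud seam could look like, nothing more.

```python
# Hypothetical sketch of a RuntimeAdapter seam. The interface name
# comes from the post; every method and class body here is guessed.
from abc import ABC, abstractmethod

class RuntimeAdapter(ABC):
    @abstractmethod
    def run(self, agent: str, payload: dict) -> dict:
        """Execute one agent run and return its output."""

class LocalAdapter(RuntimeAdapter):
    """In-process execution, roughly what v0.1 ships."""
    def run(self, agent: str, payload: dict) -> dict:
        return {"agent": agent, "runtime": "local", "input": payload}

class SandboxVMAdapter(RuntimeAdapter):
    """Hypothetical cloud adapter (a Fly.io/Render-style sandbox VM)."""
    def run(self, agent: str, payload: dict) -> dict:
        return {"agent": agent, "runtime": "sandbox-vm", "input": payload}
```

The point of the seam: callers target `RuntimeAdapter.run` and never care whether execution happens in-process or in a remote sandbox.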

Why it matters for devs: decouples skills from UIs. Build once, plug into Streamlit, Vercel, internal Slack bots. Multi-provider? Cost-optimize: Groq for speed, Mistral cheap, Anthropic quality.
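The cost-optimization idea above is just routing by objective. The provider traits below echo the post's characterization (Groq fast, Mistral cheap, Anthropic quality); the numbers are made-up placeholders, not real prices or latencies.

```python
# Illustrative objective-based provider router. Provider traits echo
# the post; all numbers are fabricated placeholders for the demo.
PROVIDERS = {
    "groq":      {"latency_ms": 80,  "cost": 3, "quality": 2},
    "mistral":   {"latency_ms": 300, "cost": 1, "quality": 2},
    "anthropic": {"latency_ms": 600, "cost": 5, "quality": 5},
}

def pick(objective: str) -> str:
    """Pick a provider name for 'speed', 'cost', or quality (default)."""
    if objective == "speed":
        return min(PROVIDERS, key=lambda p: PROVIDERS[p]["latency_ms"])
    if objective == "cost":
        return min(PROVIDERS, key=lambda p: PROVIDERS[p]["cost"])
    return max(PROVIDERS, key=lambda p: PROVIDERS[p]["quality"])
```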

Historical parallel: npm in 2010. JS libs trapped in repos. npm? Instant sharing, explosion of tools. Skrun does that for agents. Expect a hub — shareable, forkable skills marketplace.

Risks? Tool execution sandboxing lags (local now). State persistence? KV only — no vectors yet. But v0.1 ships what others promise.

Teams hoard custom agents: code review, bug triage, API docs. Skrun APIs them. Webhook CI/CD. Agent swarm orchestration.

Indies? Monetize skills. POST endpoint → Gumroad product.

Skeptical take: not every skill API-fies well. Simple prompts? Overkill. Complex agents? This shines.

The Bigger Shift: Agents Escape the Playground

AI’s moving: from chat UIs to composable APIs. LangGraph and CrewAI pull you into framework-specific orchestration. Skrun? Minimal YAML, skill-native.

Architectural win: provider-agnostic. YAML swaps models. Fallbacks auto-heal outages.

Prediction: within six months, a Skrun hub hosts a skill marketplace. Like Replicate for models, but for agents. DevTools Feed watches.

Try it. Feedback shapes v1.


Frequently Asked Questions

What is Skrun and how does it work?

Skrun’s open-source runtime converts SKILL.md files from Claude/Copilot into REST APIs. Init, configure models/tools in agent.yaml, deploy — get POST endpoints with auth.

How do I deploy an existing AI skill with Skrun?

skrun init --from-skill ./path/to/skill, edit agent.yaml (models, tools), skrun deploy. Hits localhost:4000. Curl JSON input, get structured output.

Does Skrun support multiple LLM providers?

Yes — Anthropic, OpenAI, Google, Mistral, Groq. Set primary + fallbacks in YAML. Tools via scripts or MCP (e.g., Playwright).

Written by James Kowalski

Investigative tech reporter focused on AI ethics, regulation, and societal impact.



Originally reported by dev.to
