Skrun: Turn Agent Skills Into APIs

A developer got tired of agent skills being trapped inside their original tools. So they built Skrun—a runtime that turns SKILL.md files into production-ready APIs with zero boilerplate. Here's why this matters.

One Developer Just Freed Agent Skills from Their Walled Gardens—and It Changes Everything — theAIcatchup

Key Takeaways

  • Skrun is an open-source runtime that converts Agent Skills (SKILL.md files) into production-ready REST APIs—freeing agents from walled gardens
  • It supports multiple LLM providers with automatic fallbacks, lets agents call external tools via custom scripts or MCP servers, and maintains stateful context across invocations
  • This represents a larger shift toward portable, open-source agent infrastructure—similar to how Docker and Kubernetes freed developers from vendor lock-in

You open Claude Code, build something clever, save it as a SKILL.md file, and then… nothing. It sits there. Trapped. You can’t call it from your product. You can’t trigger it from a webhook. You can’t let another service invoke it with a POST request.

That’s the problem one developer got tired of seeing. So they did what open-source builders do—they fixed it themselves. Enter Skrun, a runtime that takes those abandoned Agent Skills and turns them into callable APIs. No framework to learn. No infrastructure ceremony. Just point at a skill, configure the model, and deploy.

This is the kind of unglamorous, pragmatic tool that doesn’t make headlines but changes how people actually work.

The Walled Garden Problem Nobody Talks About

Here’s the thing about AI tooling right now: it’s everywhere, but it’s isolated. Claude Code creates skills. Copilot generates them. ChatGPT has its own agent format. They’re all solving the same problem—helping humans automate tasks—but they’re doing it inside locked boxes.

You build an agent that knows how to review code. It’s brilliant. It catches edge cases, flags security issues, explains its reasoning. But it only exists inside Claude. Want to plug it into your CI/CD pipeline? Good luck. Want to make it available to your team via a REST endpoint? You’re rebuilding from scratch.

“Each skill was trapped inside the tool that created it.”

That’s not just inconvenient—it’s a waste of creative work. And it’s the exact kind of friction that open-source tools exist to sand down.

How Skrun Actually Works (It’s Refreshingly Simple)

The workflow is almost embarrassingly straightforward. You point Skrun at an existing SKILL.md file:

skrun init --from-skill ./my-existing-skill
skrun deploy

It reads your skill, generates an agent.yaml config, validates everything, and spins up a local API endpoint. Then you call it like any other service:

curl -X POST http://localhost:4000/api/agents/dev/code-review/run \
  -H "Authorization: Bearer dev-token" \
  -H "Content-Type: application/json" \
  -d '{"input": {"code": "function add(a,b) { return a + b; }"}}'

Back comes structured output. The agent ran. It thought. It returned results.

What makes this design sing isn’t the simplicity alone—it’s the flexibility underneath. You’re not locked into one LLM provider. Skrun lets you declare your preferred model in agent.yaml, with fallbacks:

model:
  provider: google
  name: gemini-2.5-flash
  fallback:
    provider: openai
    name: gpt-4o

Google flakes out? It quietly switches to OpenAI. That’s the kind of boring, essential engineering that makes systems actually reliable.
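The fallback behavior described above can be sketched in a few lines of TypeScript. To be clear, this is an illustrative sketch, not Skrun's actual source; the `Provider` type and the function name are invented for the example.

```typescript
// Illustrative sketch of provider fallback — not Skrun's real code.
// A "provider" is modeled as any async function that takes a prompt
// and returns the model's text output.
type Provider = (prompt: string) => Promise<string>;

async function runWithFallback(
  primary: Provider,
  fallback: Provider,
  prompt: string
): Promise<string> {
  try {
    // Try the preferred model first (e.g. gemini-2.5-flash).
    return await primary(prompt);
  } catch {
    // Primary failed (timeout, rate limit, outage):
    // quietly retry against the fallback model (e.g. gpt-4o).
    return fallback(prompt);
  }
}
```

The point is that the caller never sees the failure; the switch happens inside the runtime, which is exactly why it feels "boring" from the outside.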

Two Ways to Give Your Agents Superpowers

Here’s where Skrun gets interesting. An agent without tools is just a chatbot. But Skrun gives you two paths to extend what your agents can actually do.

First: bundle your own CLI tools. Write shell scripts, Python, Node—whatever. Declare them in agent.yaml, and the agent calls them when it needs to. Want an agent that runs ESLint, queries your database, hits an internal API? Done. The LLM decides when the tool is needed. Skrun executes it. The result flows back.
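As a rough illustration, a tool declaration in agent.yaml might look something like the following. The field names here (tools, description, command) are hypothetical, shown only to convey the shape of the idea; check Skrun's own docs for the real schema.

```yaml
# Hypothetical sketch — field names are invented, not Skrun's documented schema
tools:
  - name: lint
    description: Run ESLint over the submitted file and return the report
    command: ./scripts/run-eslint.sh
```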

Or use MCP servers. Model Context Protocol is becoming the standard way to connect agents to external systems. Skrun works with any MCP server from the npm ecosystem. Spin up a Playwright browser agent. Connect to your file system. Each agent becomes a conductor orchestrating multiple external services.
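Again purely as a hypothetical sketch, mirroring the config style shown earlier (the field names are invented), wiring in an MCP server might look like:

```yaml
# Hypothetical sketch — not Skrun's documented schema
mcp:
  - name: playwright
    command: npx @playwright/mcp
```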

There’s also stateful persistence. Run the same agent twice, and it remembers. Context carries over. That’s crucial for agents that need to track progress, maintain conversations, or coordinate across multiple invocations.
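One way to picture that statefulness is a context store keyed by agent id. The following is a minimal sketch assuming simple file-based persistence; it is not Skrun's implementation, and every name in it is invented for illustration.

```typescript
// Illustrative sketch of per-agent context persistence — not Skrun's code.
import * as fs from "fs";

interface Message {
  role: "user" | "assistant";
  content: string;
}

// Load any context saved by a previous invocation of this agent.
function loadContext(agentId: string): Message[] {
  const path = `./state-${agentId}.json`;
  return fs.existsSync(path)
    ? JSON.parse(fs.readFileSync(path, "utf8"))
    : [];
}

// Persist the conversation so the next invocation picks up where
// this one left off.
function saveContext(agentId: string, messages: Message[]): void {
  fs.writeFileSync(`./state-${agentId}.json`, JSON.stringify(messages));
}
```

Run the agent, append the exchange to the stored context, save it, and the next POST to the same agent starts with that history already loaded.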

Why This Matters More Than It Looks

Skrun is small. It’s currently v0.1, deployed locally, with a straightforward feature set. But zoom out, and this is a glimpse of something bigger.

AI agents are becoming how we automate work. They’re getting better, cheaper, and more specialized every quarter. But they’ve been locked inside proprietary tooling. Skrun is one of the first open-source bridges that says: you own your agents. You can deploy them anywhere. You can chain them together. You can integrate them into your infrastructure.

Think about the historical parallels. Docker did this for containers. Kubernetes did it for orchestration. Open-source tools that freed you from vendor lock-in, gave you portability, and let you build on top of standards. Skrun isn’t Kubernetes-scale yet—but it’s operating in that same spirit.

The creator is also thinking ahead. Cloud deployment is coming (the architecture has a RuntimeAdapter interface ready). Caller-provided API keys. Streaming responses. An agent hub so you can discover and share. This isn’t just a local utility. It’s the scaffold for an ecosystem.

What’s Actually Here (and What Isn’t)

Let’s be precise. Skrun comprises 4 npm packages, 10 CLI commands, 154 tests, and 6 demo agents, all MIT-licensed. It’s open source. It’s real. You can clone it, install it, run it right now.

It’s not a production-grade platform yet. But it’s also not vaporware. The code is clean, the design is thoughtful, and the problem it solves is legitimate.

The roadmap matters too. Cloud deployment would remove the local-only limitation. Better API key handling would make this suitable for teams. A hub would turn scattered agents into a discoverable library. These aren’t massive features—they’re the kind of polish that turns a useful tool into an essential one.

The Bigger Picture: Agent Portability Is Now a Thing

We’re at an inflection point. Agent skills are being created faster than anyone expected. But they’re scattered across tools. Skrun is saying: let’s standardize on SKILL.md, build an open-source runtime, and give every developer the ability to deploy agents anywhere.

Is it going to topple Anthropic or OpenAI? No. Will it become essential infrastructure for teams building multi-agent systems? Quite possibly.

The real win here is philosophical. It’s about moving AI agents from “clever toys inside corporate products” to “portable, standardized, open systems you can actually integrate into your work.” That shift—that’s what matters.



Frequently Asked Questions

What does Skrun actually do? Skrun takes Agent Skills (SKILL.md files from Claude, Copilot, etc.) and converts them into callable REST APIs. You point it at a skill, it deploys a local endpoint, and any service can trigger the agent via POST request.

Do I need to rewrite my existing skills? No. Skrun reads your existing SKILL.md file directly. The import flow parses the skill definition and generates agent.yaml automatically. No rewriting required.

Can I use Skrun with different AI models? Yes. You declare your preferred provider (Anthropic, OpenAI, Google, Mistral, Groq) in agent.yaml, and Skrun supports fallbacks. If your primary provider fails, it switches to the fallback automatically.

Written by Sarah Chen

AI research editor covering LLMs, benchmarks, and the race between frontier labs. Previously at MIT CSAIL.


Originally reported by Dev.to
