Bash command fires. Brave Search API coughs up fresh IT headlines. Qwen3.5-4B, that scrappy 4B-param model, gnaws on them in a Docker container — no Anthropic tab running up.
Philippe, Docker’s Principal Solutions Architect, slapped this together. Tired of bleeding credits on Claude for his news fixes. Result? A Docker Agent skill that fetches, analyzes, spits Markdown. Local as hell.
But here’s the rub — it’s slow. Slower than Claude Code, he admits. Your MacBook Air wheezes with the 9B version; drop to 4B and pray.
I wanted a lightweight way to automate my IT news roundups without burning through AI credits. So I built a Docker Agent skill that uses the Brave Search API to fetch recent articles on a topic, then hands the results to a local model running with Docker Model Runner to analyze the stories and generate a Markdown report.
That’s Philippe, straight up. Practical guy. But let’s call the hype: this ain’t revolutionary. It’s Docker doing what Docker does — containerizing drudgery.
Why Chase Local AI When Cloud’s Faster?
Look. Cloud LLMs lap this setup. Claude, Grok, whatever — instant, polished. But credits? They stack like bad debt. Philippe’s fix: Docker Model Runner with Unsloth’s Qwen3.5-4B GGUF. Context window? 262K tokens, if you believe the spec sheet. (Reality: 65K on his rig, with flags tweaked.)
Prerequisites scream “DIY hell”: Docker Compose, Brave API key (free tier, sure), a model that groks function calling. No hand-holding.
He starts with a Dockerfile. Ubuntu base, curl for API pings, yoinks docker-agent binary from the official image. Creates a user, sets perms. Boilerplate, but tight.
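Pieced together from his description, the Dockerfile probably looks something like this — a sketch, not his actual file; the binary path and image tags are assumptions:

```dockerfile
# Sketch reconstructing the shape of the post's Dockerfile; paths/tags assumed.
FROM docker/docker-agent:1.32.5 AS agent

FROM ubuntu:24.04
# curl for the Brave API pings; scrub apt caches to keep the layer lean
RUN apt-get update && apt-get install -y --no-install-recommends curl wget ca-certificates \
    && rm -rf /var/lib/apt/lists/*
# yoink the agent binary from the official image (path is an assumption)
COPY --from=agent /usr/local/bin/docker-agent /usr/local/bin/docker-agent
# non-root user, perms set, per the post
RUN useradd -m agent
USER agent
WORKDIR /home/agent
ENTRYPOINT ["docker-agent"]
```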
Then config YAML. Root agent as “News Roundup Expert.” Skills enabled. Instructions? Make it an IT journalist — software eng, cloud, AI, cyber, open source. Toolset: a single script shell-out. Bash any command, snag stdout/stderr. No bespoke tools bloating the prompt.
Models section points to DMR, Hugging Face Unsloth Qwen, temp 0.0, top_p 0.95, presence_penalty 1.5. Llama.cpp flags for context. Precise. Pedantic, even.
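Roughly, that config shapes up like this — schema and field names are assumptions inferred from the post, and the Unsloth repo path is illustrative:

```yaml
# Sketch of the agent config described above; exact schema is assumed.
agents:
  root:
    model: brain                      # alias into the models section
    description: News Roundup Expert
    instruction: |
      You are an IT journalist covering software engineering, cloud,
      AI, cybersecurity, and open source. ...
    toolsets:
      - type: script                  # shell out; no bespoke tools

models:
  brain:
    provider: dmr                     # Docker Model Runner
    model: hf.co/unsloth/Qwen3.5-4B-GGUF   # assumed repo path
    temperature: 0.0
    top_p: 0.95
    presence_penalty: 1.5
    # plus llama.cpp flags to stretch context (65K in practice, per the post)
```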
Skill lives in .agents/skills/news-roundup. Fetches news, enriches with web scrapes? The source post cuts off there, but you get it: structured Markdown out.
Docker Agent’s Skill Trick — Clever or Gimmick?
Skills. Docker Agent’s party piece. Invoke ‘em like functions, but agentic. Prompt: “use news roundup skill with tiny language models.” Boom — brief.
Philippe’s twist: script toolset sidesteps heavy tooling. Shell it. Flexible. Repeatable. Save that report, tweak, reuse.
Skeptical eye, though. Local models? They’re the Yugo of AI — cheap, runs anywhere, guzzles gas (your GPU/CPU). Qwen3.5-4B? Solid for reasoning, function calls. But on consumer hardware? Expect waits. Coffee break mandatory.
Unique angle: this echoes Docker’s origin story. 2013, Solomon Hykes containers a Python app to dodge VM bloat. Now? Containerizing AI inference. Bold prediction — Docker Agent skills become the Podman of workflows: offline-first, enterprise-compliant, no vendor lock. While OpenAI feasts on subscriptions, Docker starves ‘em out.
Corporate spin? Docker’s not hyping “game-changing” here — Philippe’s blog is nuts-and-bolts. Refreshing. No vaporware.
Step-by-step? He lays it bare.
Dockerfile: multi-stage, platform-aware. Copies agent binary. Installs wget/curl. Cleans apt caches like a pro.
Config: agents.root.model = brain (Qwen alias). skills: true. Instructions verbose, role-deep.
Toolset script: bash -c "$command" 2>&1. Captures everything.
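That one-liner is the whole contract. A minimal sketch of what it does — arbitrary command in, stdout and stderr merged and handed back:

```shell
# The script toolset's contract: run any command, fold stderr into
# stdout, capture the lot. Example command is illustrative.
command='echo "hello"; echo "oops" >&2'
output=$(bash -c "$command" 2>&1)
echo "$output"   # both lines, in order
```

Dead simple, which is the point: the model composes shell commands instead of the prompt carrying a catalog of bespoke tool schemas.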
Skill dir: news-roundup. API calls, parse JSON, feed model. Markdown magic.
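The fetch step presumably boils down to one Brave Search API call. A hypothetical sketch — the endpoint and X-Subscription-Token header come from Brave's public API docs, but the query handling here is illustrative, not Philippe's code:

```shell
# Hypothetical fetch step for the news-roundup skill (not the post's code).
topic="tiny language models"
# percent-encode spaces for the query string
q=$(printf '%s' "$topic" | sed 's/ /%20/g')
# freshness=pw = past week, per Brave's API docs
url="https://api.search.brave.com/res/v1/web/search?q=${q}&count=10&freshness=pw"
echo "$url"
# With a key in BRAVE_API_KEY, the live call would be:
# curl -s -H "Accept: application/json" \
#      -H "X-Subscription-Token: $BRAVE_API_KEY" "$url" > results.json
```

The JSON that comes back gets parsed and fed to the model as context; the skill's instructions handle the Markdown shaping.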
Run it? docker-compose up, tweak env for Brave key, prompt away.
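The Compose file is implied rather than shown; a plausible minimal version, with service name and mount paths as pure assumptions:

```yaml
# Sketch of the implied Compose file; names and mounts are assumptions.
services:
  agent:
    build: .
    environment:
      BRAVE_API_KEY: ${BRAVE_API_KEY}   # free-tier key from Brave
    volumes:
      - ./.agents:/home/agent/.agents   # skill dir: .agents/skills/news-roundup
      - ./reports:/home/agent/reports   # Markdown lands here (assumed path)
```

Then `BRAVE_API_KEY=... docker-compose up` and prompt away.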
The Offline Grind: Tradeoffs Exposed
Pros: Zero credits. Private — no data to cloud. Repeatable script. Docker portability — spin on any rig.
Cons: Sloooow. The 4B model on an Air? Fine for briefs, chokes on deep dives. Brave free-tier limits? You’ll hit ‘em quick.
Compared to RSS feeders or Perplexity? Cruder. But agentic — reasons, invokes, adapts. That’s the hook.
Philippe tested 9B first — too pokey. 4B wins. Smart pivot.
Want in? Grab Brave key. docker/docker-agent:1.32.5. Unsloth Qwen GGUF. Compose file (implied). Prompt: news on Kubernetes drama.
Out: Trends, impacts, context. Journalist polish, local sweat.
Is it for you? If you’re credit-phobic, Docker diehard — yes. Else? Stick to browser tabs.
Will Docker Agent Dominate Local AI Workflows?
Short answer: maybe. Skills unlock repeatability — think cron jobs with brains. Pair with Model Runner’s Ollama vibes? Docker owns the edge stack.
History parallel: npm scripts containerized Node chaos. This? AI skills containerized prompt spaghetti.
But beware PR gloss. Docker’s pushing agents hard — watch for bloat.
Try it. Fork the repo (assuming one’s there). Tweak for your beat: cyber threats, whatever.
FAQ
How do I set up Docker Agent for news roundups?
Grab Docker Compose and a Brave API key, pull Qwen3.5-4B via Model Runner. Follow Philippe’s Dockerfile, config YAML, and skill dir. docker-compose up, then prompt away.
Is Docker Model Runner worth the slowdown?
For credit savings and privacy? Yes. Speed demons: no. 4B model hits sweet spot on laptops.
What’s a Docker Agent skill exactly?
Agent-invokable function: fetch news, parse, summarize. Script-based here — flexible as hell.
🧬 Related Insights
- Read more: Cloudflare’s Workers AI Ignites Agents with Kimi K2.5’s Massive Brainpower