Agent stares at your blog post. ‘SEO review?’ it thinks. No massive prompt dump. Instead, it loads a skill. Boom—title tags, meta descriptions, keyword stuffing rules. All on demand.
That’s ADK’s SkillToolset in action. Developer’s guide to building ADK agents with skills? Sure. But let’s cut the fluff. This isn’t revolutionary—it’s a smart fix for a dumb problem: token waste.
Bloated Prompts Are Killing Your Budget
Most agents? Glorified copy-paste machines. Devs cram compliance rules, API docs, style guides into one fat system prompt. Fine for two tasks. Scale to ten? You’re burning thousands of tokens per call. Every. Single. Time.
Even if the query’s about cat memes.
ADK flips it. Progressive disclosure. Load context when needed. Starts with 1,000 tokens of L1 metadata, not 10,000. Ninety percent savings. Here’s the original pitch:
> The SkillToolset achieves this through progressive disclosure. This architectural pattern allows agents to load context precisely when it is needed, rather than cramming thousands of tokens into a monolithic system prompt.
Smart. But is it new? Nah. Echoes old plugin systems in IRC bots—load modules on chat. Agents just got modular brains.
Your agent gets three tools: list_skills (L1 peek), load_skill (L2 instructions), load_skill_resource (L3 deep dive). Straightforward.
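To make the ladder concrete, here's a toy sketch of the three levels. This is not the real ADK API; the dictionary and functions are invented stand-ins that mimic the shape of `list_skills`, `load_skill`, and `load_skill_resource`:

```python
# Hypothetical stand-in for the three-tool ladder -- NOT the real ADK API.
SKILLS = {
    "seo-checklist": {
        "description": "SEO optimization checklist for blog posts",   # L1
        "instructions": "Check title tags, meta descriptions, keywords.",  # L2
        "resources": {"style-guide.md": "Use sentence-case titles."},  # L3
    },
}

def list_skills():
    """L1: cheap peek -- names and descriptions only."""
    return {name: s["description"] for name, s in SKILLS.items()}

def load_skill(name):
    """L2: full instructions, pulled in only when the agent commits."""
    return SKILLS[name]["instructions"]

def load_skill_resource(name, resource):
    """L3: deep dive into a bundled reference file."""
    return SKILLS[name]["resources"][resource]
```

The point of the shape: L1 is always in context, L2 and L3 cost tokens only when the agent actually asks.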
Inline Skills: The No-Brainer Starter
Simplest pattern. Python object. Name, description, instructions. Hardcode it.
Like this seo_skill beast:
```python
seo_skill = models.Skill(
    frontmatter=models.Frontmatter(
        name="seo-checklist",
        description="SEO optimization checklist for blog posts...",
    ),
    instructions="When optimizing... check each item: 1. Title...",
)
```
L1 metadata always visible. L2 loads on ‘hey, review this post.’ Agent ticks boxes systematically. Perfect for stable checklists. Security reviews. Compliance nagging.
But static. Change it? Rewrite code. Yawn.
File-Based Skills: When Checklists Need Backup
Upgrade. Directory time. SKILL.md with YAML frontmatter, plus a references/ folder.
```
skills/blog-writer/
├── SKILL.md            # Steps to follow
└── references/
    └── style-guide.md  # The meat
```
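A minimal SKILL.md for that layout might look like this. The frontmatter fields follow the pattern described here; the body text is invented for illustration:

```markdown
---
name: blog-writer
description: Drafts blog posts following the house style guide
---
When writing a post:
1. Outline first.
2. Load references/style-guide.md for tone and formatting rules.
3. Draft, then self-review against the guide.
```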
Agent reads L2 instructions: ‘Follow these steps, grab style-guide when needed.’ Calls load_skill_resource. Precise.
```python
blog_writer_skill = load_skill_from_dir(
    pathlib.Path(__file__).parent / "skills" / "blog-writer"
)
```
Reusable. Versionable. No more prompt surgery.
Here’s my hot take—the unique bit: This mirrors 90s AOL chatbot plugins. Back then, scripters swapped .uue files for new tricks. ADK? Same vibe. Bold prediction: Agent skill marketplaces explode by 2025. HuggingFace for behaviors. Google’s already seeding with npx skills add google/adk-docs.
But corporate spin alert—‘dynamically expanding capabilities’ sounds godlike. It’s just lazy loading, folks. Effective, sure. Hype? Overkill.
Why Does the External Skills Pattern Feel Like Cheating?
Pattern three. Same API. Different source.
Download from awesome-claude-skills repo. Or community hubs. load_skill_from_dir doesn’t care.
```python
content_researcher_skill = load_skill_from_dir(
    pathlib.Path(__file__).parent / "skills" / "content-research-writer"
)
```
Universal spec at agentskills.io. Plug-and-play. Your agent becomes a skill hoarder.
Four patterns build here—inline to file-based to external to generated. Culmination? Agent that writes new skills at runtime. ‘Need data pipeline validator? Generate it, load it, run it.’
Skeptical? Test it. Token bills drop. But watch for skill conflicts—agent loads wrong one, hallucinates chaos.
Is ADK’s Progressive Disclosure Actually Better Than RAG?
RAG fans, pipe down. Retrieval-Augmented Generation pulls docs willy-nilly. ADK? Structured. L1/L2/L3 ladder. Agent decides relevance first.
> By using this architecture, an agent with 10 skills starts each call with roughly 1,000 tokens of L1 metadata instead of 10,000 tokens in a monolithic prompt.
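The arithmetic behind that claim is simple. The per-skill token counts below are assumed round numbers, not measured ADK figures:

```python
# Back-of-envelope math -- assumed sizes, not measured ADK numbers.
n_skills = 10
l1_tokens_per_skill = 100      # name + description only
full_tokens_per_skill = 1_000  # full instructions inlined in the prompt

monolithic = n_skills * full_tokens_per_skill   # everything upfront
progressive = n_skills * l1_tokens_per_skill    # L1 metadata only
savings = 1 - progressive / monolithic

print(f"{savings:.0%} fewer upfront tokens")    # prints "90% fewer upfront tokens"
```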
Proof in numbers. RAG embeddings? Vector search noise. Skills? Curated, prompt-native.
Downsides? Skill authoring grind. Bad SKILL.md? Garbage in, garbage agent. And runtime generation—risky. LLM writes its own rules? Recipe for drift.
Still, for devs building production agents: Game on. Ditch monoliths.
The Real Power: Agents That Evolve
Picture this. User: ‘Audit my pipeline.’ Agent: No skill? Generate one via meta-prompt. Load. Execute.
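The generation step can be sketched as plain file-writing: emit a SKILL.md into the skills directory, then load it like any file-based skill. Everything here is illustrative—the helper name, the frontmatter, and the skill contents are invented; in practice the instructions body would come from an LLM meta-prompt:

```python
import pathlib
import tempfile
import textwrap

def generate_skill(skills_root: pathlib.Path, name: str, instructions: str) -> pathlib.Path:
    """Hypothetical helper: write a minimal SKILL.md so the new skill
    can be loaded like any file-based skill (e.g. via load_skill_from_dir)."""
    skill_dir = skills_root / name
    skill_dir.mkdir(parents=True, exist_ok=True)
    (skill_dir / "SKILL.md").write_text(textwrap.dedent(f"""\
        ---
        name: {name}
        description: Generated at runtime
        ---
        {instructions}
        """))
    return skill_dir

# In a real agent, `instructions` would be the LLM's meta-prompt output.
root = pathlib.Path(tempfile.mkdtemp())
skill_dir = generate_skill(root, "pipeline-validator",
                           "Check each pipeline stage for schema drift.")
```

Note the drift risk called out later: a generated SKILL.md should be reviewed or sandboxed before the agent trusts it.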
Workflow: list_skills → decide → load_skill → load resources → apply.
Scales to armies. Compliance teams? Custom validators on the fly. Marketers? SEO kits per vertical.
Dry humor break: Finally, agents that don’t need babysitters. Or do they? One rogue skill, and your agent’s preaching flat-earth compliance.
Unique insight payoff: Like Linux kernel modules—hotplug expertise without reboot. ADK ports that to AI. Expect enterprise lock-in via proprietary skills. Open spec saves it.
Building Your First ADK Agent: Quick Wins
Grab ADK. Define skills. Wire SkillToolset.
Start inline. Graduate to files. Hunt externals.
Pro tip: Version SKILL.md with Git. Track drifts.
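One way that versioning workflow looks in practice—paths, file contents, and commit messages here are all illustrative:

```shell
# Illustrative: track SKILL.md drift with plain Git (paths are examples).
mkdir -p blog-agent/skills/blog-writer
printf -- '---\nname: blog-writer\n---\nDraft posts per the style guide.\n' \
  > blog-agent/skills/blog-writer/SKILL.md
git init -q blog-agent
git -C blog-agent add skills/blog-writer/SKILL.md
git -C blog-agent -c user.email=dev@example.com -c user.name=dev \
  commit -qm "blog-writer v1"
# Per-skill history: every instruction change is auditable.
git -C blog-agent log --oneline -- skills/blog-writer/SKILL.md
```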
Tokens saved: 90%. Speed: Up. Sanity: Yours.
But here’s the barb—most devs won’t bother. ‘Prompt engineering’s fine,’ they say. Until costs spike.
🧬 Related Insights
- Read more: AI-First Apps Obliterate Frontend’s Sacred Rules
- Read more: GitHub Copilot CLI Meets Local LLMs: Control at a Cost
Frequently Asked Questions
What is ADK SkillToolset?
Tool for loading agent skills on demand via progressive disclosure—L1 metadata, L2 instructions, L3 resources. Cuts token bloat by roughly 90%.
How do you build skills for ADK agents?
Inline Python objects for basics. Directories with SKILL.md for files. Download externals via agentskills.io spec.
Can ADK agents generate new skills at runtime?
Yes—prompt LLM to create SKILL.md, load it instantly. Risky, powerful for dynamic expertise like audits or validators.