Your agent’s humming along, crunching through an expensive LLM analysis on that draft report, when it hits pause for human approval. Resume. Boom: two hours of GPU time down the drain, because LangGraph decided to re-run the whole node from scratch.
That’s the selectools pitch, right there. I’ve seen this circus before—20 years chasing Silicon Valley’s agent hype cycles. But this open-source Python framework (Apache-2.0, pip install selectools) claims to thread the needle: multi-agent graphs, tool calling, RAG, 50 evaluators, PII redaction. All in. Supports OpenAI, Anthropic, Gemini, Ollama. No vendor lock-in nonsense.
Look, agents are the new black—everyone’s building them. But most frameworks leak like sieves. Selectools? It’s got a generator trick for interrupts that doesn’t make you contort your code into idempotent pretzels.
Why LangGraph’s Interrupts Are a Pain (And Selectools Fixes It)
LangGraph’s interrupt() re-executes the node body on resume. By design, sure—checkpoint-replay model. Official fix? Idempotent side effects, shove expensive stuff post-interrupt, or split nodes. It’s workable, but every human-in-the-loop spot turns into architecture gymnastics. Leaky as hell.
Selectools yields an InterruptRequest from a generator. Resumes exactly at yield via generator.send(). That pricey analysis? Runs once. No resets.
“Resumes at the exact yield point (LangGraph restarts the whole node).”
—from the v0.18.0 changelog. They call it out directly. Here’s the code:
```python
async def review_node(state):
    analysis = await expensive_llm_analysis(state.data["draft"])  # runs once
    decision = yield InterruptRequest(prompt="Approve?", payload=analysis)
    state.data["approved"] = (decision == "yes")  # resumes here
```
Clean. Pythonic. No “make everything idempotent” boilerplate.
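The resume mechanics are plain Python, no framework required. Here’s a sketch of the same pattern with a sync generator for brevity (an async node would use `asend()`); the `InterruptRequest` class and `run_with_human` runner below are illustrative stand-ins, not selectools internals:

```python
from dataclasses import dataclass

@dataclass
class InterruptRequest:
    prompt: str
    payload: object = None

def review_node(state):
    # Pretend this is the expensive call; it runs exactly once.
    analysis = f"analysis of {state['draft']}"
    decision = yield InterruptRequest(prompt="Approve?", payload=analysis)
    state["approved"] = (decision == "yes")

def run_with_human(node, state, answer):
    gen = node(state)
    request = next(gen)       # advance to the yield: the interrupt fires
    print(request.prompt)     # hand off to a human here
    try:
        gen.send(answer)      # resume at the exact yield point
    except StopIteration:
        pass
    return state

state = run_with_human(review_node, {"draft": "Q3 report"}, "yes")
# The analysis never recomputes; state["approved"] is now True.
```

That’s the whole trick: `send()` injects the human’s answer as the value of the `yield` expression, so execution continues mid-function instead of restarting it.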
I’ve covered this before—back in 2018, early RL frameworks forced similar hacks for state persistence. PyTorch won by ditching static graphs for dynamic ones. Selectools feels like that: generators keep it fluid, no rigid replays.
Multi-Agent Graphs Without the DSL Bloat
AgentGraph: directed graph for agent nodes. Routing? Plain Python functions. No learned routers, no proprietary DSLs. Why? Production wants deterministic flow—LLMs reason inside nodes, not hijack topology.
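In plain Python, that routing style looks something like this; the node names and state keys here are made up for illustration, not AgentGraph’s real API:

```python
def route(state: dict) -> str:
    # Deterministic control flow: the LLM's output lives in state,
    # but the branching is ordinary Python you can unit-test.
    if state.get("needs_research"):
        return "researcher"
    if state.get("draft_ready"):
        return "reviewer"
    return "writer"

route({"needs_research": True})  # -> "researcher"
```

Because the router is a function, not a prompt, the topology never surprises you in production.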
ContextMode options kill token bloat: LAST_MESSAGE (default), LAST_N, FULL, SUMMARY, CUSTOM. Downstream agents won’t drown in upstream chit-chat.
Parallel exec with MergePolicy (LAST_WINS, FIRST_WINS, APPEND). Fan-out, fan-in, done.
Loop detection via state hashing—stalls if nothing changes. Smart.
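The stall check is easy to sketch: hash a canonical serialization of the state each step and bail when it stops changing. This is my guess at the mechanism, not selectools’ actual code:

```python
import hashlib
import json

def state_hash(state: dict) -> str:
    # Hash a canonical serialization so dict ordering doesn't matter.
    blob = json.dumps(state, sort_keys=True, default=str).encode()
    return hashlib.sha256(blob).hexdigest()

def run_until_stall(step, state, max_steps=50):
    seen = state_hash(state)
    for _ in range(max_steps):
        state = step(state)
        h = state_hash(state)
        if h == seen:          # nothing changed: likely a loop, stop
            return state, "stalled"
        seen = h
    return state, "max_steps"
```

Cheap insurance against an LLM that keeps handing the baton back and forth without making progress.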
But here’s my cynical take: who’s making money? John Nichev, solo dev? GitHub stars don’t pay rent. This screams “enterprise-ready open source” to lure talent or get acquired. Remember how Haystack (RAG framework) got scooped? Same vibe.
SupervisorAgent: Four Ways to Herd Cats
Four strategies, no fluff.
| Strategy | Description | Best for |
|---|---|---|
| plan_and_execute | LLM JSON plan, sequential exec | Structured tasks |
| round_robin | Turns, supervisor checks | Iterative refinement |
| dynamic | LLM picks agent per step | Heterogeneous tasks |
| magentic | Magentic-One ledgers + replan | Autonomous research |
Magentic rips off Microsoft Research’s pattern—task/progress ledgers, auto-replan. ModelSplit: big models plan, cheap ones execute. 70-90% cost slash.
Solid. But dynamic routing? That’s where costs explode if your LLM hallucinates loops. Selectools’ hashing saves you—again, production-minded.
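That 70-90% figure checks out as back-of-envelope math. With hypothetical per-token prices, one big-model planning call plus cheap-model execution calls lands squarely in that range:

```python
# All numbers are illustrative, not real provider pricing.
big, small = 10.0, 0.5           # $ per 1M tokens
tokens_per_step, steps = 2000, 10

all_big = (steps + 1) * tokens_per_step * big / 1e6
split = (tokens_per_step * big / 1e6
         + steps * tokens_per_step * small / 1e6)
savings = 1 - split / all_big    # ~0.86 with these numbers
```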
Tool Calling, RAG, Evals—All Baked In
Tool calling: standard, but integrated smoothly.
RAG? Vector stores, retrieval chains—pip and go.
50 evaluators out of the box: 30 deterministic, the rest LLM-judged. No paid services required. PII redaction too, via regex plus LLM scrubbers. Privacy by default, finally.
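The regex half of that scrub is easy to picture. A minimal sketch, assuming nothing about selectools’ actual patterns; a second LLM pass would catch the names and addresses that regexes miss:

```python
import re

# Deterministic first pass: swap obvious PII for labeled placeholders.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    for label, pat in PATTERNS.items():
        text = pat.sub(f"[{label}]", text)
    return text

redact("Mail jane@corp.com or call 555-123-4567")
# -> "Mail [EMAIL] or call [PHONE]"
```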
In 2005, I watched Hadoop bundle everything for big data. Fragmented tools died. Selectools is that for agents: one pip, no glue code hell.
My bold prediction? If it hits critical mass (say, 10k stars), LangGraph forks or LangChain pivots. But open source graveyards are littered with agent frameworks—CrewAI, AutoGen. Maintenance is the killer. Who’s funding this?
Is Selectools Actually Production-Ready?
Short answer: closer than most.
Async everywhere. Ollama local runs. Cheap model splits. Stall detection. It’s not vaporware.
But test it yourself—v0.20.1 just dropped. Scale to 100 agents? Unknown. That’s the agent tax: hype outpaces reality.
Who’s winning? You, if you’re building now. VCs? Not yet—they chase closed-source agent startups burning $100M on infra.
Why Does This Matter for Indie Devs?
One pip. No PhD in graph theory required. Boot an agent swarm in hours, not weeks.
Skeptical me says: try the generator interrupt. If it clicks, you’ve saved weeks of debugging. If not, uninstall and move on—zero lock-in.
Parallelism shines in research agents: fan-out queries, merge summaries. Cost: pennies with Ollama.
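With stdlib asyncio alone, that fan-out/merge shape looks like this; `fetch_summary` is a hypothetical stand-in for a per-query agent, and the ordered merge mimics an APPEND-style policy:

```python
import asyncio

async def fetch_summary(query: str) -> str:
    await asyncio.sleep(0)               # stand-in for a local model call
    return f"summary for {query!r}"

async def research(queries: list[str]) -> list[str]:
    # Fan out one task per query, then merge results in input order.
    return await asyncio.gather(*(fetch_summary(q) for q in queries))

results = asyncio.run(research(["agents", "RAG"]))
```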
Enterprise? PII redaction alone justifies eval spend. GDPR headaches vanish.
The Money Question: Who’s Cashing In?
John Nichev’s GitHub. Apache license—free as in beer. But watch for LangChain integration or acquisition. They’ve absorbed plenty.
Unique insight: this echoes TensorFlow’s early modularity fail vs. PyTorch’s eager execution. Generators = eager agents. Static checkpoints? Yesterday’s news.
Prediction: by 2025, half of today’s agent frameworks will be dead. Selectools survives if community evals grow.
Frequently Asked Questions
What is selectools?
An open-source Python framework for AI agents: multi-agent graphs, tool calling, RAG, evals, and PII redaction, all in one pip install selectools.
How does selectools compare to LangGraph?
Better interrupts via generators (resumes at yield, no re-runs). Deterministic routing, bundled features—no extras needed.
Is selectools free and open source?
Yes, Apache-2.0. Supports local models like Ollama.