Multi-AI Development: Building aeoptimize CLI

Imagine ditching the endless back-and-forth with one AI. This dev built a slick CLI by firing off tasks to three AIs at once — and the results? Faster code, vulnerabilities caught before release, better tools for everyone competing for AI attention.

One Dev's Mad Experiment: Building aeoptimize by Dispatching Claude, Gemini, and Copilot in Parallel — theAIcatchup

Key Takeaways

  • Multi-AI workflows split tasks by model strengths for faster, higher-quality code.
  • aeoptimize CLI audits sites for AI readability with rules + optional AI consensus.
  • Parallel AIs act as adversarial reviewers, exposing dev blind spots like SSRF vulnerabilities.

Your website’s vanishing from AI overviews. Not because it’s bad — but because ChatGPT, Perplexity, or Google’s AI can’t “read” it right. Enter aeoptimize, the open-source CLI that scores your site for AI readability, and the wild multi-AI workflow that birthed it.

Devs, this hits home. You’re churning code, but one AI chat feels like herding cats — context lost, blind spots everywhere. What if you could parallelize your brain trust? Dispatch chunks to Claude for deep reasoning, Gemini for quick plugins, Copilot for framework hacks. That’s how aeoptimize got made, and it’s a blueprint for your next project.

Why Does Multi-AI Development Feel Like Cheating?

Look, single-AI dev is sequential drudgery: prompt, paste, tweak, swear. But this? Parallel pipelines. The creator sliced the pie by AI strengths — Claude’s long-context wizardry for the 17-rule scoring engine, parsing HTML/Markdown, spitting out llms.txt and JSON-LD schemas. Gemini whipped up a 52-line Vite plugin on first try, hooking into configResolved and closeBundle for auto-build goodies. Copilot mirrored it for Next.js, 55 lines, framework-fluent.

It’s not random. Claude ate the security audits — four rounds, SSRF shields, shell injection blocks. Why? That beast holds context across adversarial reviews. The plugins? Speed demons needed, idioms matched to ecosystems they’ve “seen” more.

And here’s the kicker — my unique spin: this echoes the 1940s Manhattan Project’s division of labor. Physicists didn’t solo the bomb; specialists tackled chunks in parallel. Today, AIs are your Oppenheimer team. Corporate hype calls it “agentic workflows,” but it’s just smart outsourcing, minus the ethical baggage.

“The redirect issue is worth dwelling on. The original code validated the initial URL — that seemed sufficient. An adversarial reviewer asked what happened when a validated URL redirected somewhere else. The answer was not good.”

That quote nails it. Post-v1, the dev sicced Codex on the codebase as an attacker. Four high-severity bugs surfaced: unpinned npx in GitHub Actions (hello, supply-chain hack), a pre-commit hook scanning the wrong tree, a hook clobbering others, and SSRF via redirects.

All fixed in v0.5.2: a bundled CLI for Actions, git show for staged diffs, delimited hooks, and private-IP checks on every redirect hop.
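The per-hop check is worth sketching, because it's the fix for the redirect SSRF quoted above: follow redirects manually and validate every hop's host, not just the first URL. The function names isPrivateHost and fetchGuarded are mine, and real code would also need to resolve DNS names before trusting them.

```javascript
// Sketch of a per-hop SSRF guard (assumed shape, not aeoptimize's code).
function isPrivateHost(hostname) {
  if (hostname === "localhost" || hostname === "::1") return true;
  const m = hostname.match(/^(\d+)\.(\d+)\.(\d+)\.(\d+)$/);
  if (!m) return false; // DNS names need resolution before this check
  const [a, b] = [Number(m[1]), Number(m[2])];
  return (
    a === 10 ||                          // 10.0.0.0/8
    a === 127 ||                         // loopback
    (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12
    (a === 192 && b === 168) ||          // 192.168.0.0/16
    (a === 169 && b === 254)             // link-local / cloud metadata
  );
}

async function fetchGuarded(url, maxHops = 5) {
  for (let hop = 0; hop < maxHops; hop++) {
    const u = new URL(url);
    if (isPrivateHost(u.hostname)) throw new Error(`blocked: ${u.hostname}`);
    const res = await fetch(url, { redirect: "manual" });
    const loc = res.headers.get("location");
    if (res.status >= 300 && res.status < 400 && loc) {
      // Re-validate the NEXT hop on the next loop pass -- the original bug
      // was validating only the first URL.
      url = new URL(loc, url).toString();
      continue;
    }
    return res;
  }
  throw new Error("too many redirects");
}
```

The loop is the whole point: a validated public URL can 302 to 169.254.169.254, so each hop gets the same scrutiny as the first.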

Brutal efficiency.

How Does aeoptimize Actually Work Under the Hood?

Fire it up: npx aeoptimize scan your-site.com. Offline rule engine cranks 17 deterministic rules across five dimensions — no API calls, no limits. Score spits out, say, 72/100.
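A deterministic rule engine like that can be tiny: each rule is a pure predicate over the parsed page, and the score is the weighted share of rules that pass. The rules and weights below are invented for illustration; the article doesn't publish aeoptimize's actual 17.

```javascript
// Toy offline rule engine -- same pattern, made-up rules and weights.
const rules = [
  { id: "has-title", weight: 2, check: (p) => Boolean(p.title) },
  { id: "has-json-ld", weight: 3, check: (p) => p.jsonLdBlocks > 0 },
  { id: "has-llms-txt", weight: 3, check: (p) => p.hasLlmsTxt },
  { id: "short-paragraphs", weight: 1, check: (p) => p.avgParagraphWords < 80 },
];

function score(page) {
  const total = rules.reduce((s, r) => s + r.weight, 0);
  const passed = rules.reduce((s, r) => s + (r.check(page) ? r.weight : 0), 0);
  return Math.round((passed / total) * 100); // 0-100, no API calls
}
```

Pure predicates mean the same page always gets the same score — that's what makes the offline mode unlimited and reproducible.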

Add --multi-ai? Magic. Checks for gemini/copilot CLIs, ships page content, merges verdicts.

Two AIs? 50/50 rules vs. consensus. One? 60/40 rules-heavy. None? Pure rules.
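Those three cases reduce to a few lines. The weights (50/50, 60/40, rules-only) come straight from the article; the function shape is mine.

```javascript
// Blend the deterministic rule score with the AI consensus score,
// weighted by how many AI CLIs were detected on the machine.
function blend(ruleScore, aiScore, aiCount) {
  if (aiCount >= 2) return Math.round(0.5 * ruleScore + 0.5 * aiScore);
  if (aiCount === 1) return Math.round(0.6 * ruleScore + 0.4 * aiScore);
  return ruleScore; // no AI CLIs found: pure rules
}
```

Anchoring the blend to the rule score is also the hallucination hedge mentioned later: one flaky AI verdict can only move the number so far.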

Insights like Gemini’s “Missing llms.txt reduces discoverability” or Copilot’s FAQ schema nag. It’s external eyes on your content — assumptions you can’t spot because you’re the author.

Same as code review. You write validation; it “works.” Adversary pokes: redirects? Boom, internal IP leak.

This scales to content. Your FAQ is golden, but no FAQPage schema? AI skips it. No llms.txt? Crawlers fly blind.
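The FAQPage fix is mechanical once you see it: a schema.org JSON-LD block that tells crawlers your FAQ is a FAQ. This helper builds a minimal valid one (placeholder questions, my own function name); embed the output in a script tag with type application/ld+json in the page head.

```javascript
// Build a minimal schema.org FAQPage JSON-LD string from Q/A pairs.
function faqJsonLd(faqs) {
  return JSON.stringify({
    "@context": "https://schema.org",
    "@type": "FAQPage",
    mainEntity: faqs.map(({ q, a }) => ({
      "@type": "Question",
      name: q,
      acceptedAnswer: { "@type": "Answer", text: a },
    })),
  });
}
```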

Plugins ship in one npm pack: aeoptimize/vite, aeoptimize/next. Build-time auto-gen.
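Wiring the Vite flavor in would presumably look like any other Vite plugin. The aeoptimize/vite import path comes from the article; the export shape and option names below are guesses, so check the project README before copying.

```javascript
// vite.config.js -- hypothetical wiring; only the import path is sourced.
import { defineConfig } from "vite";
import aeoptimize from "aeoptimize/vite";

export default defineConfig({
  plugins: [aeoptimize({ siteName: "example.com" })],
});
```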

Is Multi-AI the Future — Or Just a Gimmick?

Skeptical? Me too, at first. Hype machines promise “swarms,” but this is pragmatic. No orchestrators, no Kubernetes for AIs — just dispatch by fit.

Predictions: expect npm plugins for this. VS Code extensions pinging multiple models. GitHub Copilot gets siblings.

For real people — indie devs, indie sites — it’s gold. Score your blog, fix schema gaps, climb AI citations. No more ghosting by Perplexity.

Security parallel shines brightest. Self-review misses loads; external AIs don’t care about your “it works” vibe.

But caveats. Relies on local CLIs — install friction. Rules deterministic, but AI consensus? Still black-box-ish. And if one AI hallucinates?

Weights mitigate: rules anchor it.

Wander a bit: remember early Git? Linus solo’d core, community plugins. Here, AIs as first-pass community.

GitHub: https://github.com/dexuwang627-cloud/aeoptimize

npm: https://www.npmjs.com/package/aeoptimize

Try it. Split your next tool this way. Curious? How do you divvy AIs?

What Does aeoptimize Score, Exactly?

Five dimensions, 17 rules. Parser rips HTML/Markdown. Checks structured data, crawl hints, readability proxies.

llms.txt? Like robots.txt for LLMs — your content’s VIP list.
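For the unfamiliar, a minimal llms.txt following the proposal's shape is just Markdown: an H1 for the site, a blockquote summary, then sections of annotated links. Contents below are placeholders.

```text
# Example Site

> One-paragraph summary of what the site covers, written for LLMs.

## Docs
- [Getting started](https://example.com/docs/start): install and first scan
- [FAQ](https://example.com/faq): common questions, with FAQPage schema
```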

JSON-LD? Schema.org juice for entities.

Pre-commit hook scans staged changes. No more merging crap.

Dense, right? But that’s the shift: AI dev isn’t replacement — it’s amplification. Parallel AIs catch what solo misses, build faster.

For the open-source beat, this democratizes pro workflows. No PhD needed; npm install, npx scan.



Frequently Asked Questions

What is aeoptimize and how do I use it?

It’s a CLI scoring sites for AI crawler friendliness. Run npx aeoptimize scan your-site.com for a quick audit.

Does multi-AI development speed up coding?

Yes — parallel tasks cut iteration time, use strengths, catch blind spots like that SSRF redirect.

Is aeoptimize free and open source?

Totally. MIT license on GitHub, npm install aeoptimize.

Written by Sarah Chen

AI research editor covering LLMs, benchmarks, and the race between frontier labs. Previously at MIT CSAIL.



Originally reported by Dev.to
