Invisible Code Supply Chain Attack Hits GitHub

A new supply-chain attack is hiding malicious code in plain sight using invisible Unicode characters. Traditional defenses? Completely useless.

Invisible Code Is Now Flooding GitHub. Your Code Review Won't Catch It. — theAIcatchup

Key Takeaways

  • 151 malicious packages using invisible Unicode characters evaded detection across GitHub, NPM, and Open VSX in March 2024
  • Traditional code reviews and static analysis tools fail against invisible code because the malicious payload doesn't appear on screen
  • An LLM-powered attack group called Glassworm is likely behind the coordinated campaign, generating convincing legitimate-looking changes at scale

151 malicious packages in a single week. That’s what Aikido Security found flooding GitHub, NPM, and Open VSX in early March—and here’s the kicker: your code review probably won’t catch a single one.

This isn’t your garden-variety typosquatting attack where some jerk uploads a package named react-dom-ui hoping you’ll fat-finger the install command. This is something far more insidious. The malicious packages look legitimate. They smell legitimate. They have documentation, version bumps, bug fixes, refactors—all the trappings of a real contribution. But buried in there, hidden behind Unicode characters that are invisible to the human eye and to most code editors, terminals, and review platforms, is executable malware. You could stare at the code for hours and never see it.

“The malicious injections don’t arrive in obviously suspicious commits. The surrounding changes are realistic: documentation tweaks, version bumps, small refactors, and bug fixes that are stylistically consistent with each target project.”

That’s the Aikido researchers describing what they found. And if you’re thinking this sounds almost… sophisticated… you’d be right.

How Invisible Code Actually Works

The technique itself is straightforward. Unicode offers thousands of characters—many of which render as nothing on screen. Zero-width joiners. Invisible separators. Homoglyphs that mimic ordinary letters. You name it. Attackers can embed executable code using these characters, and when your terminal or editor loads the file, it displays the clean, legitimate code. But when the JavaScript interpreter or Python runtime parses it? It sees the hidden payload. The magic trick works because humans read what's displayed, but machines execute what's actually there.
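A toy demonstration of that displayed-versus-parsed gap (my illustration, not an actual payload from the campaign): the two strings below print identically in most terminals, but the second one contains a zero-width space.

```python
# Toy demonstration, not a real payload: the second string hides a
# zero-width space (U+200B), which most editors and terminals render
# as nothing at all.
visible = "getConfig"
hidden = "get\u200bConfig"

print(visible, hidden)            # both appear as "getConfig" in most fonts
print(visible == hidden)          # False: the machine sees the extra character
print(len(visible), len(hidden))  # 9 10
```

The same trick scales up: anywhere a runtime treats an invisible character as significant (in an identifier, a string literal, a comparison), the code a human reads and the code a machine runs quietly diverge.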

What makes this particularly nasty is that static analysis tools—the kind designed to detect malicious patterns—can technically see the hidden code, because they operate on the raw bytes rather than the rendered text. But telling malicious invisible characters apart from legitimate Unicode usage buries teams in false positives. At the volume of a modern dependency tree, triaging them all becomes practically impossible.

And we are talking about scale here. The researchers suspect an attack group they’re calling Glassworm is behind this. The sophistication, the volume, the consistency across different codebases? They think an LLM is generating these packages. Koi Security, which independently tracked the same group, agrees. As the Aikido team put it: “At the scale we’re now seeing, manual crafting of 151+ bespoke code changes across different codebases simply isn’t feasible.”

Why Your Current Defenses Are Failing

This is where things get depressing for security teams. The traditional playbook—code review, static analysis, dependency scanning—assumes that malicious code will at least look weird. An odd import. An obvious payload. An unexpected network call.

But when the malicious bits are literally invisible, when they’re wrapped up in legitimate-looking changes, when they’re generated at scale to mimic the style of real developers… well, you’re basically asking humans to find a needle in a haystack while someone’s actively hiding the needle with ninja-level precision.

Static analysis tools can theoretically catch this. But they’d need to be tuned specifically for Unicode-based obfuscation, and they’d need to be smart enough to distinguish between legitimate Unicode usage (which exists) and malicious usage (which now also exists). Most organizations aren’t running that kind of inspection on every dependency. And if they did, the performance hit would be staggering.
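To make that tuning problem concrete, here is a minimal sketch of what such an inspection might look like (my illustration, not Aikido's tooling): flag every character in Unicode's "format" (Cf) category—the class that contains zero-width joiners and bidirectional controls—outside a small allowlist.

```python
import unicodedata

# Soft hyphen occasionally shows up legitimately in documentation;
# every other "Cf" (format) character is treated as suspicious here.
# Real-world allowlists would need per-project tuning — exactly the
# false-positive problem described above.
ALLOWED = {"\u00ad"}

def find_invisible(text: str) -> list[tuple[int, int, str]]:
    """Return (line, column, codepoint) for suspicious format characters."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            if unicodedata.category(ch) == "Cf" and ch not in ALLOWED:
                hits.append((lineno, col, f"U+{ord(ch):04X}"))
    return hits

sample = "const x = 1;\nconst y\u200b = 2;"
print(find_invisible(sample))  # [(2, 8, 'U+200B')]
```

The scan itself is cheap; the expensive part is deciding, for every hit across thousands of dependencies, whether it is an attack or just internationalized text.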

What Actually Happened Here

Let’s ground this in the real attack. Aikido found the malicious packages between March 3 and March 9. They were uploaded to legitimate repositories—GitHub, NPM, Open VSX. The packages had legitimate-looking names. Some of them were downloaded thousands of times before being detected. We don’t know exactly what the payloads do (the researchers didn’t spell it out), but in supply-chain attacks like this, the goal is usually one of a few things: stealing credentials, installing backdoors, exfiltrating source code, or establishing persistence for future attacks.

The fact that it took security researchers to find this—and that it wasn’t caught by the repositories’ own automated systems—should tell you something about the current state of open-source security infrastructure.

The LLM Angle Changes Everything

Here’s the uncomfortable truth that nobody wants to say out loud: AI is making it easier to commit sophisticated attacks. A human attacker would need to understand multiple codebases, match their coding styles, and craft convincing commits. That’s hard. That’s time-consuming. But an LLM? Point it at a GitHub repository, tell it to generate a pull request that looks native, include some invisible malicious code, and boom—you’ve got a production-ready attack that scales across dozens or hundreds of repos.

We’ve spent years worried about AI being used to generate phishing emails or deepfakes. Nobody talks about AI being weaponized for supply-chain attacks, but that’s exactly what’s happening here. And the defenders are still playing the old game.

What Comes Next

So what do you do? The honest answer is: this is hard to solve at the individual developer level. You can’t expect humans to review invisible characters. You can’t reasonably expect traditional tools to catch something that’s designed specifically to bypass them.

The real solutions require infrastructure changes. Repository platforms need to sanitize invisible Unicode characters—or at least flag them loudly. Dependency managers need stricter validation. Maybe we need runtime monitoring that can detect unexpected behavior from supposedly-benign packages. Maybe we need better provenance tracking so you can verify who actually wrote the code you’re running.
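One of those infrastructure-level measures is cheap to sketch: a commit or upload gate that rejects files containing the character families abused in these attacks. This is a hypothetical example—the codepoint list below is illustrative rather than exhaustive, and wiring it into a pre-commit hook or CI pipeline is left out.

```python
# Hedged sketch of a commit gate. The set below covers the zero-width
# and bidirectional-control characters most often cited in invisible-
# code attacks; it is illustrative, not a complete blocklist.
SUSPICIOUS = {
    "\u200b",  # zero-width space
    "\u200c",  # zero-width non-joiner
    "\u200d",  # zero-width joiner
    "\u2060",  # word joiner
    "\u202a", "\u202b", "\u202c", "\u202d", "\u202e",  # bidi embeddings/overrides
    "\u2066", "\u2067", "\u2068", "\u2069",            # bidi isolates
    "\ufeff",  # byte-order mark appearing mid-file
}

def gate(source: str, path: str = "<stdin>") -> list[str]:
    """Return human-readable findings; an empty list means the file passes."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for ch in sorted(SUSPICIOUS & set(line)):
            findings.append(f"{path}:{lineno}: suspicious U+{ord(ch):04X}")
    return findings

clean = "export function add(a, b) { return a + b; }"
dirty = "export function add\u200d(a, b) { return a + b; }"
print(gate(clean, "clean.js"))  # []
print(gate(dirty, "dirty.js"))  # ['dirty.js:1: suspicious U+200D']
```

Enforced at the platform level rather than per-developer, a check like this would at least raise the cost of the current technique—though, as the next section argues, attackers would likely just iterate past it.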

But here’s what probably won’t happen: none of this will be implemented quickly. GitHub, NPM, and the rest will patch some obvious holes. They’ll issue a stern blog post about supply-chain security. And then attackers will iterate. They’ll find new invisible characters. They’ll tweak the technique. Because the attack surface is massive, the defenders are fragmented, and the payoff for attackers is huge.

Meanwhile, if you’re a developer maintaining a popular open-source package? You’ve just got one more thing to worry about. Great.



Frequently Asked Questions

What is invisible code in a supply chain attack? Malicious code hidden using Unicode characters that don’t display in editors or terminals but execute when interpreted by compilers or runtimes. It allows attackers to embed payloads in otherwise legitimate-looking packages.

Can I detect invisible code in my dependencies? Manually? No. With tools? Maybe, but you’d need specialized Unicode obfuscation detection that most organizations don’t currently run. Your standard code review won’t catch it.

Is my project at risk from Glassworm? If you’re running packages from GitHub, NPM, or Open VSX, theoretically yes. Aikido found 151 malicious packages in a single week. Regular dependency audits and being cautious about new or obscure packages help, but there’s no silver bullet.

Written by Priya Sundaram

Hardware and infrastructure reporter. Tracks GPU wars, chip design, and the compute economy.



Originally reported by Ars Technica - Tech
