AI Use in Malware: Current State Analysis

Your browser cookies are still at risk from basic infostealers, but the AI twist? It's more theater than terror. Here's why the hype around AI-powered malware doesn't match reality—for now.

Screenshot of .NET infostealer code calling OpenAI GPT-3.5 API endpoints

Key Takeaways

  • AI in malware is mostly non-functional hype: unused API calls add noise, not power.
  • Remote LLM C2 via OpenAI is traceable and costly—easy for defenders to spot.
  • No wild local agentic AI yet; deployment hurdles keep it theoretical.

Imagine waking up to find your passwords snatched—not by some genius AI overlord, but by a script-kiddie who copy-pasted ChatGPT prompts into sloppy .NET code. That’s the current state of AI use in malware. For everyday folks grinding away at their desks, it means the cyber boogeyman hasn’t leveled up much. Sure, hackers are poking at large language models like OpenAI’s GPT-3.5-Turbo, but Unit 42’s latest OSINT hunt shows it’s all flash, no bang.

Look, I’ve chased Silicon Valley’s promises for two decades—self-driving cars that’d end traffic deaths (ha), VR worlds we’d never leave (yawn). Now it’s AI arming cybercriminals. But this report? It cuts through the buzz. Two samples: one infostealer faking smarts with unused LLM calls, another Golang dropper that queries an AI to ‘assess’ your machine before infecting. Real people? You’re safer than the headlines scream.

Why Bother with AI-Written Malware at All?

Threat actors love a shortcut. Unit 42 pegs three use cases: AI spits out code, runs remote C2 decisions, or handles local agentic flows. Guess which one’s missing in the wild? That last one—local AI brains baked into malware. Why? Deploying models on victim boxes is a nightmare, says the report. Embed ‘em? Even tougher.

But here’s the cynical vet’s take: this echoes the 2010s Flash exploit gold rush. Everyone hyped zero-days as the endgame; most were sloppy phishing anyway. AI lowers the bar for noobs, sure—type a prompt, get rusty malware. Yet Unit 42’s telemetry shows pros still sweat the basics: evasion, persistence. No AI panacea.

And those samples. First up, the .NET infostealer—packed with ConfuserEx 2, calling OpenAI APIs like it’s auditioning for a sci-fi flick.

“This integration with OpenAI indicates the malware may enable a lower skilled threat actor to interact with an infected environment without having to learn lateral movement, data collection and persistence techniques themselves.”

Sounds scary. Reality? Four functions—GenerateEvasionTechnique, AnalyzeTargetEnvironment, etc.—all DOA. Prompts truncated, responses ignored. It's AI Theater, as Unit 42 dubs it. Console logs tease scare messages and fake EDR dodges; none of them actually do anything. Exfil to C2 happens without any AI help at all. Noise for defenders to spot, really.

Two near-identical samples. Same flaws. Low-skill actor testing waters? Or just cargo-cult coding from GitHub?

Short version: it’s functional as a plain stealer. Cookies, system info—stolen fine. AI? Window dressing that screams ‘scan me.’
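Those leaked strings are actionable. Here's a minimal triage sketch for hunting LLM-related indicators in an unpacked sample's raw bytes. The indicator list is illustrative, assembled from the endpoints and model names the article mentions, not Unit 42's published IOC set:

```python
# Hunt for hardcoded LLM API indicators in an unpacked sample's strings.
# The IOC list below is illustrative, not an official Unit 42 ruleset.
import re

LLM_IOCS = [
    rb"api\.openai\.com",
    rb"/v1/chat/completions",
    rb"gpt-3\.5-turbo",
]

def find_llm_indicators(blob: bytes) -> list[str]:
    """Return every LLM-related indicator found in the raw bytes."""
    hits = []
    for pattern in LLM_IOCS:
        if re.search(pattern, blob):
            # Strip the regex escapes for a readable report line.
            hits.append(pattern.decode().replace("\\", ""))
    return hits

# Simulated strings dumped from a deobfuscated .NET stealer.
sample = b"POST https://api.openai.com/v1/chat/completions model=gpt-3.5-turbo"
print(find_llm_indicators(sample))
# → ['api.openai.com', '/v1/chat/completions', 'gpt-3.5-turbo']
```

In practice you'd run this over `strings` output or a memory dump after unpacking, since ConfuserEx hides these from a naive static scan.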

Is OpenAI’s API Malware’s New Best Friend?

Shift to the dropper: Golang beast, flagged on X as a Sliver precursor. It pings GPT-3.5-Turbo to eyeball your environment—sandbox? AV? Proceed or bail?

Smart in theory. AI-Gated Execution. Query the LLM: ‘Is this safe to infect?’ Get a yay/nay. Beats hardcoded checks.

But. Unit 42 dissects it: prompt’s vague, responses parsed poorly. Sometimes it infects sandboxes anyway. HTTP API calls light up networks like Christmas—OpenAI domains in malware traffic? IOC goldmine.
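That goldmine is cheap to mine. A hedged sketch of flagging LLM API hosts in proxy logs follows; the log format, field order, and domain list here are assumptions for illustration, not any vendor's detection logic:

```python
# Flag outbound requests to known LLM API hosts in a proxy log.
# Log format ("client_ip method host path") is an assumed example.
LLM_DOMAINS = {"api.openai.com"}

def flag_llm_traffic(log_lines: list[str]) -> list[tuple[str, str]]:
    """Return (host, client_ip) pairs for connections to LLM API hosts."""
    flagged = []
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 3 and parts[2] in LLM_DOMAINS:
            flagged.append((parts[2], parts[0]))
    return flagged

log = [
    "10.0.0.5 GET www.example.com /index.html",
    "10.0.0.7 POST api.openai.com /v1/chat/completions",
]
print(flag_llm_traffic(log))
# → [('api.openai.com', '10.0.0.7')]
```

The point isn't the ten lines of Python; it's that a workstation talking to an LLM API when no sanctioned tool should be is a one-rule alert.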

I’ve seen this movie. Remember botnets dreaming of ML evasion? Died fast—models too fat, latency kills stealth. Here, remote C2 via paid APIs? Traceable, rate-limited, costly. OpenAI bans suspicious keys quick. Who’s footing the bill—Russian script-kiddie on a Visa gift card?

Unique angle you won’t find in Unit 42’s post: this is PR bait for defenders. Palo Alto touts protections, sure—but it’s selling fear of the future to lock in contracts today. History repeats: post-WannaCry, everyone bought EDR. AI malware? Same playbook, zero real evolution yet.

Digging deeper: the infostealer collects browser data and files, dumps them to disk, and zips everything off to C2. Obfuscated, but strings leak OpenAI endpoints. The EvasionTechnique prompt cuts off mid-sentence: “Generate a simple evasion technique for a data extraction tool. Return only the t”—incomplete. Laughable.

Dropper’s no better. LLM decides on Sliver payload drop. But env checks simplistic: processes, VMs. AI adds… what? Flair? Cost?

What Keeps Real AI Malware at Bay?

Challenges abound. Local models? Malware bloats to gigabytes—won’t fly. Remote? APIs log everything, block abuse. Fine-tuning? Needs data, compute—APT territory, not forum trolls.
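The bloat claim checks out on a napkin. A quick estimate, assuming a hypothetical 7-billion-parameter model (sizes are rough; real checkpoints add overhead):

```python
# Back-of-envelope: why shipping a local LLM inside malware doesn't fly.
# The 7B parameter count is a hypothetical "small" model, not a specific one.
def model_size_gb(params: float, bytes_per_param: float) -> float:
    """Approximate weight size in gigabytes."""
    return params * bytes_per_param / 1e9

print(model_size_gb(7e9, 2.0))  # fp16 weights → 14.0 GB
print(model_size_gb(7e9, 0.5))  # 4-bit quantized → 3.5 GB
```

Even aggressively quantized, that's gigabytes of payload to smuggle past a download, plus RAM and CPU the victim would notice. Droppers live in kilobytes.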

Prediction: watch nation-states. China’s got LLM chops; pair with Cobalt Strike? Now we’re talking. But street-level? Still phishing kits from 2015.

For you, the email-clicker: update Chrome, use MFA. This ‘AI malware’ wave? Overhyped, like the cryptojacking malware that never mined enough to cover the electric bill.

Palo Alto plugs their Unit 42 services—fair. But customers? WildFire, Cortex XDR snag these easy. API calls = dead giveaway.

Bottom line after 20 years: tech fixes lag threats, but hype fixes nothing. AI in malware’s embryonic—clumsy, detectable. Sleep easy, but eyes open.



Frequently Asked Questions

Will AI malware replace traditional hackers?

Nah, not yet. It aids noobs, but pros stick to battle-tested tools. Local AI’s too clunky.

How does OpenAI API show up in malware traffic?

HTTP POSTs to api.openai.com with JSON prompts—network logs light it up. Block or monitor those domains.

Is my antivirus enough against AI-powered threats?

For these samples, yes. But watch for evolution—remote C2 via LLMs is the vector to flag.

Written by Elena Vasquez

Senior editor and generalist covering the biggest stories with a sharp, skeptical eye.



Originally reported by Palo Alto Unit 42
