AI Military Use: Anthropic, OpenAI, and Autonomous Weapons

Anthropic's Claude isn't just writing emails anymore. It's reportedly orchestrating military strikes. As AI companies slide deeper into defense contracts, the line between software vendor and weapons manufacturer has become impossible to ignore.

Key Takeaways

  • Anthropic's Claude is reportedly central to AI-driven military operations, blurring the line between software vendor and weapons manufacturer in real time.
  • Current U.S. law provides little accountability for AI companies whose models are weaponized downstream by governments or defense contractors, thanks to vague licensing agreements and Section 230 protections.
  • The liability question—who bears responsibility when AI systems recommend targets that kill civilians—remains legally orphaned, with no clear path to accountability under existing frameworks.
  • Tech companies have the contractual power to restrict military use but strategically choose not to, embedding themselves in defense infrastructure while maintaining plausible deniability.

Is your favorite AI chatbot also someone’s military command system?

For years, the conversation about AI-driven warfare lived in the realm of speculation—a distant worry for defense think tanks and sci-fi writers. Then it wasn’t theoretical anymore. Anthropic’s Claude, the same large language model that thousands of companies use for customer service and content generation, has reportedly played a central role in coordinating military operations. According to the reporting, more than 900 strikes were launched within the first 12 hours of the operation. One targeted Iran’s supreme leader. Another hit a girls’ elementary school, killing at least 165 people, mostly students.

That’s not a dystopian movie plot. That’s AI military capability in the real world, right now, and it forces a question that tech executives and policymakers have been dodging for years: What is the legal and moral liability when a software company’s product becomes a weapon?

The AI-Defense Industrial Complex Is Here

Anthropic and OpenAI didn’t wake up one morning and decide to become weapons manufacturers. That’s not how it works. It happens slowly, then all at once.

Both companies have been scouting government contracts for years. OpenAI has partnerships with the U.S. Department of Defense (though it’s been cagey about specifics). Anthropic? It’s been positioning itself as the “responsible” AI alternative—safety-conscious, aligned with human values, the kind of place that sounds good in a press release.

Then the military came knocking, and suddenly the theoretical ethics discussions became operational questions. Because here’s the thing: once you hand your model to a defense contractor or military authority, you lose meaningful control over how it gets used. It becomes someone else’s tool. Your disclaimers don’t follow it into the strike zone.

“What happens when these companies and governments start building systems that help decide who lives and who dies in a war?”

That’s not just a business question. It’s a legal powder keg.

Why Can’t AI Companies Say No (Legally)?

You’d think there’d be some obvious legal mechanism preventing this. There isn’t—not really. The U.S. has export controls on dual-use technologies (things that can be used militarily or civilly), but large language models exist in a gray zone. They’re not classified as weapons. They’re software. Defense contractors can license them, integrate them into systems, and deploy them under government authority. The original company that built them? Often has no contractual right to know what happens downstream.

Contracts matter here. If Anthropic didn’t build explicit restrictions into its licensing agreements—restrictions that prohibit military use or require transparency—then legally, there may be little stopping the Pentagon from using Claude exactly as it has. The company could have said no at the negotiation table. Once the ink dries? That leverage is gone.

There’s also the question of plausible deniability. If Anthropic says it didn’t intend for its model to be used this way, does that shield it from liability when civilians die as a result of system recommendations? International humanitarian law says no. The Geneva Conventions hold actors accountable for foreseeable civilian harm, regardless of intent.

But here’s where it gets messy: is civilian harm from an AI system “foreseeable”? The company can argue the model is just providing analysis, that humans make the final decisions, that they’re not responsible for how a general uses the output. The legal system hasn’t tested those arguments yet—not at this scale.

The Heidy Khlaaf Question Nobody’s Answered

Heidy Khlaaf, the chief AI scientist at the AI Now Institute and an expert in AI safety within defense and national security, worked at OpenAI before shifting her focus to autonomous weapons accountability. She’s seen this from both sides: the inside of a frontier AI lab and the outside perspective of someone studying the fallout.

Her core insight, implicit in her career pivot, is brutal: these companies know what they’re building. They know it can be weaponized. And they’re moving forward anyway, wrapped in enough plausible deniability and legal gray area to avoid catastrophic PR consequences.

The problem is structural. A software company selling to the U.S. government isn’t selling weapons—it’s selling capabilities. The government does the weaponization. The company maintains separation. Legally, everyone’s clean. Morally?

That’s where it falls apart.

Who’s Actually Liable When Civilians Die?

This is the question that should keep tech executives awake at night—but probably doesn’t, because the answer is murky.

If Claude recommended a target, and that recommendation was integrated into a military command system, and that system launched a strike that killed 165 schoolchildren, who bears responsibility? Anthropic for building the model? The defense contractor for integrating it? The military officer who executed the order? All of them? None of them?

International law suggests all of them. But enforcement is another matter. The U.S. doesn’t recognize the International Criminal Court for its citizens. Domestic courts have been reluctant to hold companies liable for actions taken by their corporate customers using their products (thanks, Section 230). And the government? It has sovereign immunity.

What you end up with is a kind of legal orphan—a catastrophic outcome with no clear accountability structure. Which means, perversely, every incentive points toward building the system and hoping the legal architecture catches up later (it won’t, not before the next war).

What Does This Mean for the AI Industry?

This is the inflection point that separates the venture-backed startup version of AI from the real-world version.

For Anthropic, Claude’s deployment in a military context is validation and liability in equal measure. Validation: the model works at scale in high-stakes environments. Liability: the company is now enmeshed in an outcome that will be scrutinized, litigated, and potentially prosecuted. Even if legal liability is fuzzy, political liability is sharp.

OpenAI has been here longer and has handled it with characteristic ambiguity—partnerships with defense entities, public statements about responsible AI, and strategic silence about specifics. It’s a playbook, and it works until it doesn’t.

The calculus changes if you’re not a frontier AI lab but a downstream company. If you integrate Anthropic’s API into your product, you now have contractual exposure to how military customers use your application. If you’re a venture capital firm funding AI companies, you need to ask whether government contracting revenue is worth the regulatory and reputational blowback.

And if you’re a developer or employee at these companies? You’re working in an industry that has openly moved past the ethics question and into the optimization phase.

The Silence of the Enterprise Market

One more thing worth noting (and this is where the story gets darker): the enterprise AI market—companies using LLMs for everything from customer service to document review to compliance—is about to face the same friction.

When governments can requisition AI systems for military purposes, what happens to commercial deployment? Do you build kill switches into your models? Geofencing that prevents military use? Those are engineering problems with political answers, and companies will resist them because they cost money and constrain revenue.
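To make the engineering side concrete, here is a minimal, purely hypothetical sketch of what a vendor-side policy gate could look like: a check on a customer’s declared end use, an embargoed-region list, and a remotely controlled kill switch that fails closed. Every name here (RequestContext, EndUse, KILL_SWITCH_URL, authorize) is invented for illustration and does not correspond to any real vendor’s API or to anything described in the reporting.

```python
# Hypothetical policy gate for an LLM API, sketched for illustration only.
# No vendor is known to ship this; all names and endpoints are assumptions.
import os
from dataclasses import dataclass
from enum import Enum, auto

import requests  # assumed available; used to poll a hypothetical kill-switch flag


class EndUse(Enum):
    COMMERCIAL = auto()
    GOVERNMENT_CIVILIAN = auto()
    MILITARY = auto()


@dataclass
class RequestContext:
    customer_id: str
    declared_end_use: EndUse      # self-reported at contract time
    deployment_region: str        # e.g. an ISO country code


PROHIBITED_END_USES = {EndUse.MILITARY}
EMBARGOED_REGIONS = {"XX"}        # placeholder; a real list would come from export-control counsel
KILL_SWITCH_URL = os.environ.get("KILL_SWITCH_URL", "https://example.invalid/kill-switch")


def kill_switch_engaged() -> bool:
    """Poll a remote flag the vendor controls; fail closed if it cannot be reached."""
    try:
        resp = requests.get(KILL_SWITCH_URL, timeout=2)
        return resp.json().get("engaged", True)
    except requests.RequestException:
        return True  # if the flag cannot be verified, refuse to serve


def authorize(ctx: RequestContext) -> bool:
    """Return True only if the request passes every policy check."""
    if kill_switch_engaged():
        return False
    if ctx.declared_end_use in PROHIBITED_END_USES:
        return False
    if ctx.deployment_region in EMBARGOED_REGIONS:
        return False
    return True


if __name__ == "__main__":
    ctx = RequestContext("acme-defense-001", EndUse.MILITARY, "US")
    print("allowed" if authorize(ctx) else "blocked")  # prints "blocked"
```

The catch, and it is the point of this piece, is the input: declared_end_use is whatever the customer reported when the contract was signed. A gate like this only constrains customers who tell the truth about what they are building, which is exactly the plausible-deniability gap described above.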

Better to build the system, get the funding, and hope the legal system doesn’t catch up.


Frequently Asked Questions

Can AI companies be held legally responsible for military use of their models?

Under current U.S. law, the answer is murky. International humanitarian law holds actors accountable for foreseeable harm from their systems, but enforcement mechanisms are weak. Domestic courts have been reluctant to hold software companies liable for downstream use by customers, citing Section 230 protections. The real liability depends on what’s written into the licensing contract—and tech companies have been careful to leave those clauses vague.

Will the U.S. government regulate AI military use?

Not proactively. The Pentagon and intelligence agencies benefit from the current ambiguity. Congressional interest exists, but not the political will to restrict U.S. military capabilities. You might see rules for other countries’ AI weapons, but not for American ones. Expect regulation to follow disaster, not precede it.

Should I work for an AI company that sells to the military?

That’s a personal ethics question, not a legal one. Know what you’re building and who it goes to. If that conflicts with your values, your options are clear: work elsewhere, or accept the moral weight. Companies are betting you won’t.

Written by Elena Vasquez

Senior editor and generalist covering the biggest stories with a sharp, skeptical eye.


Originally reported by AI Now Institute
