If you think the Department of Defense’s standoff with Anthropic was the end of the government’s push to access AI systems without restrictions, think again. The real battle is happening somewhere far quieter: in federal procurement rules.
While everyone was watching the DoD fight play out in headlines, another branch of the U.S. government was already working on a solution. The General Services Administration—the agency that sets the contract terms for how the federal government buys everything from office supplies to software—is quietly rewriting the rules around AI procurement. And if these rules stick, they won’t just reshape federal AI buying. They’ll reshape what AI companies are legally permitted to do.
Why This Matters Way More Than It Sounds
Look, using procurement as a policy lever makes sense. Governments should direct tax dollars toward companies that serve the public interest. That’s not controversial. Open-source software? Interoperability? Right to repair? Those are legitimate values the government should fund.
But the GSA’s new guidelines cross a line—and tech nonprofits just filed comments today (through partners at the Center for Democracy and Technology, Protect Democracy Project, and EPIC) explaining why.
What’s Actually in These Rules?
Two provisions stand out as particularly dangerous. The first requires contractors to license their AI systems to the government for “all lawful purposes.” And here’s where it gets scary: the government’s definition of “lawful” is… well, let’s just say flexible. The feds have a demonstrated ability to find loopholes in surveillance law, and they’ve been willing to break it outright. Do we really want to hand them a blank check to data-mine whatever they want?
“If a company’s safety guardrails might prevent responding to a government request, the company must disable those guardrails.”
The second provision is even more direct. AI systems “must not refuse to produce data outputs or conduct analyses based on the Contractor’s or Service Provider’s discretionary policies.” Translation: if a company built safety features to prevent misuse, those features become liabilities under federal contract. Disable them or lose the contract.
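To see what that provision means in engineering terms, here is a minimal sketch of where a provider’s discretionary policy typically sits in an inference pipeline. This is a hypothetical illustration, not any vendor’s actual implementation; every name and policy category is invented for the example.

```python
# Hypothetical sketch: where a "discretionary policy" check lives in an
# inference pipeline. Names and policies are illustrative only.

from dataclasses import dataclass


@dataclass
class PolicyDecision:
    allowed: bool
    reason: str = ""


# A provider's discretionary policy: refuse certain use cases outright.
BLOCKED_USE_CASES = {"mass_surveillance", "targeted_tracking"}


def policy_check(request_tags: set[str]) -> PolicyDecision:
    """Pre-inference gate implementing the provider's own safety policy."""
    blocked = request_tags & BLOCKED_USE_CASES
    if blocked:
        return PolicyDecision(False, f"refused by provider policy: {sorted(blocked)}")
    return PolicyDecision(True)


def handle_request(prompt: str, request_tags: set[str]) -> str:
    decision = policy_check(request_tags)
    if not decision.allowed:
        # This branch is exactly what the proposed rule would prohibit:
        # a refusal "based on the Contractor's ... discretionary policies."
        return decision.reason
    return run_model(prompt)


def run_model(prompt: str) -> str:
    # Placeholder for the actual inference call.
    return f"<model output for: {prompt!r}>"
```

Under the proposed language, the refusal branch above is not a product feature a vendor may keep. It is a contract violation.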
Is This Actually About Safety, or About Control?
Here’s the thing that should make your antennae go up: the government is framing this as promoting “ideologically neutral” American AI innovation. But the actual text includes provisions that are technologically incoherent—like vague “anti-woke” requirements that don’t map to anything engineers can actually build. Those aren’t safety features. They’re political litmus tests.
The real goal seems to be preventing another Anthropic situation. In that standoff, the company said no to surveillance use cases. Now the government wants a rule that makes “no” legally impossible.
What Happens If These Rules Pass?
These rules wouldn’t just apply to one contract; they’d become standard components of every federal contract going forward. If you’re an AI company and you want federal work, you’re dismantling your safety measures. And if safety features are being stripped for government contracts, why would a company maintain them for commercial use? Why eat the cost?
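That cost is concrete: keeping guardrails for commercial customers while stripping them for federal work means shipping and testing two divergent configurations. A hypothetical sketch of the fork (profile names and flags invented for illustration):

```python
# Hypothetical sketch of the maintenance burden created by per-contract
# guardrails. Profile names and flags are invented for illustration.

GUARDRAIL_PROFILES = {
    "commercial": {"refusals_enabled": True, "audit_logging": True},
    "federal": {"refusals_enabled": False, "audit_logging": True},
}


def load_profile(contract_type: str) -> dict:
    # Every safety-relevant code path now branches on contract type,
    # which doubles the test matrix and invites silent divergence.
    return GUARDRAIL_PROFILES[contract_type]
```

Once the “federal” profile exists, the cheapest way to simplify is to make it the only profile.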
You end up with a chilling effect that spreads beyond government use entirely. The private sector watches. Other companies follow suit to remain competitive. What started as a procurement rule becomes an industry norm.
There’s also a historical parallel here that’s worth noting. During the Cold War, the government used procurement requirements to push contractors toward militarization. Many of those practices outlasted the geopolitical rationale for them. We’re watching something similar develop in real time with AI.
What Tech Nonprofits Are Actually Saying
The groups filing comments aren’t against government AI spending. They’re against using procurement to accomplish something that should be debated openly as policy. If the government wants to fund surveillance capabilities, that should be a vote in Congress. If it wants to restrict safety measures, that should be a regulation written in the open, with public comment periods, not buried in GSA guidelines.
Their position: the government should start over. Separate the legitimate parts (promoting American AI development, ensuring interoperability) from the weaponized parts (forcing companies to disable safeguards).
The Bigger Picture
This is a test case for how government power moves in tech policy. The Anthropic fight was visible. Everyone could watch it, form opinions, write op-eds. This procurement rewrite? It’s invisible unless you’re reading Federal Register updates. By the time most people notice, the rules are already in effect, embedded in contracts, normalized.
And that normalization is the real win for the government. Not because the GSA cares about ideology, but because once companies stop building safety features for government work, they soon stop building them at all. And nobody can demand a safeguard the industry has already decided doesn’t exist.
Frequently Asked Questions
What does the GSA procurement proposal actually do? It would require AI companies contracting with the federal government to license their systems for “all lawful purposes” and disable any safety features that might prevent responding to government requests. These rules would apply to all federal contracts going forward.
Will AI companies actually have to follow these rules? Not yet. The rules are still in the comment period. But if they’re finalized, yes—any company that wants federal work would need to comply. This could pressure the entire industry if companies see federal contracts as essential business.
Why is this different from normal government procurement? Normal procurement buys goods that already exist (software, hardware, services). These rules would force companies to fundamentally alter how their products work before selling them to government. It’s not buying—it’s demanding structural changes as a condition of sale.