FLI boasts 35 full-time staff across the US and Europe, grinding away at AI risks since 2014. Last week, they dissected the White House’s National Security Memorandum on AI. And their verdict? A half-cheer, laced with warnings.
Look. The memo tells national security agencies to shape up on AI procurement, shield infrastructure from foreign meddlers, vet risky models, and bulk up government talent. It crowns the Commerce Department’s AI Safety Institute as the go-to evaluator. Nice on paper. But FLI’s Hamza Chaudhry isn’t popping champagne.
“The National Security Memo released this week is a critical step toward acknowledging and addressing the risks inherent in unchecked AI development — especially in the areas of defense, national security, and weapons of war.
“This memo contains many commendable actions, efforts, and recommendations which, if implemented, will make the United States and the world safer from the threats and uncertainties of AI — including empowering the U.S. Department of Commerce’s AI Safety Institute and increasing AI expertise across key government agencies and departments.”
That’s the upbeat part. Chaudhry keeps it real, though.
White House AI Memo: Safety Net or Security Blanket?
Here’s the thing—this NSM reeks of urgency, timed right before an election that could flip the script on tech policy. Agencies get nudged to evaluate models posing ‘security and safety risks to the public.’ Vague enough? Sure. It pushes the National AI Research Resource to share goodies with academics and civil society. Sounds collaborative. Except FLI spots the elephant: a ‘purely competitive approach’ to AI.
And they’re not wrong. Picture this: US vs. China in an AI arms race, models getting deadlier by the month. The memo authorizes safety checks but stops at voluntary nods. No fines. No bans. Just guidance. FLI pushes back hard—cooperate with competitors, they say, or kiss stable governance goodbye.
“Working actively with strategic competitors, in addition to close allies and partners, is critical to guarantee responsible AI development and advance US national security interests. Lack of cooperation will make it harder to cultivate a stable and responsible framework to advance international AI governance that fosters safe, secure, and trustworthy AI development and use.”
Punchy. True. History screams it—Manhattan Project secrecy birthed nukes, but Cold War near-misses forced arms control talks. AI’s no different. Ignore rivals now, and you’re building smarter bombs without brakes.
Does the NSM Actually Fix AI’s National Security Nightmares?
Short answer: Nope. Not yet.
Dive deeper. Procurement rules? Agencies must prioritize ‘secure and trustworthy’ AI. Great—until you realize Big Tech vendors dominate, with their own risk gloss-overs. Infrastructure protection from ‘foreign interference’? Hackers laugh at memos. Model evaluations? The AI Safety Institute gets the nod, but under Commerce? That’s the trade guys, not DARPA hardliners. Talent-building? Sure, but government’s AI brain drain is legendary—private sector pays better.
FLI nails it: this is ‘just the start.’ They demand ‘compulsory requirements for safe AI.’ Voluntary commitments? We’ve seen that movie—companies nod, then sprint ahead. Remember OpenAI’s safety pledges? Now they’re chasing AGI with minimal oversight.
My hot take, absent from FLI’s polite prose: this memo’s a PR shield for the administration. Dropped amid election heat, it lets Biden tout AI leadership without ruffling defense hawks or Silicon Valley donors. Bold prediction—post-election, it’ll gather dust unless Congress mandates it. Trump 2.0? Deregulation city. Harris? Incremental tweaks. Either way, without international teeth, it’s theater.
But wait—unique angle. Think back to the 1972 Biological Weapons Convention. The US unilaterally renounced its bioweapons program in 1969, then dragged the Soviets to the table. AI needs that playbook: lead with transparency, not just walls. FLI’s cooperation call echoes it perfectly, but Washington’s too busy saber-rattling.
Why FLI’s Getting Louder on AI Governance
FLI isn’t some fringe outfit. A decade in the AI-risk trenches. 35 pros. They’ve shaped discourse—from pausing giant models to UN treaties. Chaudhry’s statement? Calculated jab. Praise to build bridges, cautions to prod action.
Yet skepticism reigns. The memo empowers the AI Safety Institute—good. But as primary ‘point of contact’? That’s a bottleneck. What if evaluations clash with Pentagon needs? Weapons autonomy’s the ghost in the machine here. The NSM nods to risks in ‘defense’ and ‘weapons of war,’ but draws no red lines on lethal autonomous systems. Killer drones with AI brains? Still on the menu.
And the election kicker.
“While the release of this memo represents progress in safeguarding AI deployment in national security and beyond, it represents just the start of the urgent action needed. It is vital that the government move from voluntary commitments toward compulsory requirements for safe AI. And it’s essential that the memo’s prescriptions and recommendations are turned into policy — no matter what the results of the upcoming election.”
Dry humor alert: ‘No matter the election.’ Yeah, right. Politics gonna politic.
Expand on the risks. Unchecked AI in security? Think hacked models leaking nuclear codes, or adversarial attacks turning surveillance into sabotage. Foreign interference? China’s pouring billions—the US can’t solo it. FLI’s right: allies plus competitors, or bust.
Is Cooperation with China on AI a Pipe Dream?
Hell yes, it feels dreamy. But necessary.
US-China AI talks fizzled post-2018 trade war. Now? Export controls choke chips, fueling paranoia. The memo sidesteps this, focusing inward. FLI begs to differ—stable frameworks demand dialogue. Historical parallel: the ozone layer. Rivals cooperated on CFCs under the Montreal Protocol. AI extinction risks dwarf that.
Critique the spin. White House frames NSM as bold leadership. FLI peels it back: commendable, sure, but incomplete. Corporate hype parallel—Nvidia touts safe AI while raking billions. Governments mirror it.
Deeper still. Implications for devs? More red tape on fed contracts. Model-makers? Safety audits loom. Public? Safer infrastructure, maybe. But without mandates, it’s window dressing.
Wander a bit—FLI’s Europe arm adds muscle, pushing EU AI Act synergies. US memo could harmonize, but competition clause kills it.
Bottom line: Progress. Barely. FLI’s acerbic nudge? Keep pushing for laws, not letters.
Frequently Asked Questions
What is the White House National Security Memorandum on AI?
It’s guidance for US agencies on AI use, procurement, security, risk evaluation, and talent—authorizing the AI Safety Institute as lead evaluator.
Does the NSM make AI development safer?
Partially—it promotes evaluations and expertise, but FLI says it’s voluntary and lacks international cooperation, needing compulsory rules.
Will the 2024 election kill this AI memo?
FLI hopes not, urging policy conversion regardless—but politics could derail it fast.