Governor Gavin Newsom’s pen scratches across SB 53 in a Sacramento ceremony, and suddenly everyone’s an AI safety expert.
Michael Kleinman, the Future of Life Institute’s US policy head, couldn’t contain his glee. He’s out there applauding — loudly. But let’s pump the brakes. Is this really the landmark AI safety legislation it’s cracked up to be, or just California doing what California does: virtue-signaling with a side of tech pandering?
Look. Across the country, folks want guardrails on AI. Kleinman cites polls: 82% of Republicans agree there should be limits on what these digital Frankensteins can do, and over 70% back government safety standards. Fine. Voters are spooked: chatbots hallucinating, deepfakes ruining elections, job-stealing bots. Who wouldn’t want a leash?
What the Hell is SB 53 Anyway?
Short answer: California’s stab at reining in advanced AI systems. On paper it’s the Transparency in Frontier Artificial Intelligence Act: frontier developers must publish their safety frameworks, report critical safety incidents to the state, and keep their hands off whistleblowers. The definitions get fuzzy fast (lawmakers love that vagueness), but it’s aimed at the big frontier models, the ones that could, say, design bioweapons or crash markets.
Newsom signed it amid bipartisan panic. Remember that Senate vote this summer? 99-1 to kill a federal moratorium on state AI laws. States said no thanks, we’ll handle it. And here we are.
Kleinman’s statement drips optimism:
We applaud Governor Newsom for signing this vital legislation. Across America, the demand for stronger AI legislation continues to grow, with large majorities of both Republicans and Democrats calling for common-sense AI safeguards, including 82% of Republicans who agree there should be limits on what AI is allowed to do, and more than 70% of voters who support the government taking action to set safety standards.
Nice quote. Polished. But here’s my unique take: this echoes the 1960s auto safety wars. States like Wisconsin mandated seatbelts first, forcing Detroit’s hand before Nader’s federal push. AI’s on that path — fragmented rules breeding chaos, until Washington wakes up. Prediction? By 2026, we’ll have a patchwork of 50 state AI laws, turning compliance into a nightmare for OpenAI and pals. Genius.
But wait. Kleinman admits more work’s needed. “Basic protections,” he calls them, like the ones we demand from pharma or planes. Or sandwich shops? Cute analogy. Your local deli doesn’t hallucinate E. coli recipes, Mike.
Is SB 53 Actually Enforceable?
Enforcement. That’s the rub. California regulators already drown in gig worker suits and privacy probes. Who’s policing GPT-7? Underfunded AG’s office? Dream on.
And the loopholes. SB 53 draws its line at “frontier” models, pegged to a raw compute threshold, and thresholds like that age badly; clever labs will train just under the line. Open-source whizzes will skirt it overnight. Companies lobby hard; Anthropic, xAI, they’re FLI donors too (subtle influence?). This bill’s more theater than titanium.
Dry humor time: If AI safety were a sandwich shop, SB 53’s the “no raw chicken” sign — ignored when the lunch rush hits.
Critics (me included) smell PR spin. Newsom’s termed out and auditioning for 2028; a splashy AI-safety signing photographs well. The tech overlords barely flinched, and Anthropic outright endorsed the bill. Why? Because it’s toothless. Real safety? Ban military AI contracts. Cap compute. Nah, too spicy.
States stepping up fills a federal void. Kleinman nails that: “Unless and until there are strong federal AI safety standards… both blue and red states will have no choice.” Spot on. Texas, Florida — they’re next. Red states hate Big Tech as much as blue ones fear Skynet.
But fragmentation? Disaster. Imagine software that works in Cali but bricks in Utah. Devs weep.
Why Kleinman and FLI Are All In
Future of Life Institute, one of the older AI safety nonprofits: founded in 2014 to steer powerful tech away from doom, 35 staffers jetting between the US and Europe. They’ve got cred; when Musk put up $10 million for early x-risk research in 2015, FLI was the outfit handing out the grants. Kleinman’s their policy bulldog.
His full statement pushes the narrative: a landmark moment, basic safeguards like every other industry gets, while more work remains. Understate much?
Here’s the thing. FLI’s not wrong on public demand. Polls scream it. But applauding half-measures? That’s how we got the crypto Wild West.
Skepticism mode: This bill dodges the real beast — AGI timelines. If models hit superintelligence by 2030 (per some forecasts), SB 53’s a paper shield against nukes.
Will Federal AI Laws Finally Happen?
Short term? No. Congress is gridlocked. Biden’s EO was candy-ass weak, and Trump scrapped it in January anyway. The Senate’s 99-1 vote shows states own this now.
Bold call: the federal posture for the next few years is deregulation, not guardrails, so the quilt gets patched state by state. States lead till 2027, minimum.
Impact on jobs, kids, communities? Kleinman invokes them. Valid fears — AI tutors displacing teachers, fake nudes scarring teens. But legislate thoughtfully, or kill innovation.
Wander a sec: pharma regs took decades to get right. Aviation safety was built one crash investigation at a time. AI is moving at warp speed. Can regulators keep pace?
Punchy truth. SB 53’s a start. Barely. Newsom gets a golf clap. FLI gets their win. But don’t pop champagne yet.
Corporate hype alert: Tech PR spins this as harmony. Bull. It’s war — safety vs. speed. Choose poorly, regret eternally.
🧬 Related Insights
- Read more: EU Member States’ Crushing Load: 88 AI Act Tasks That Could Fracture Enforcement
Frequently Asked Questions
What is SB 53 in California?
SB 53, signed by Governor Newsom, sets transparency and safety requirements for developers of the most powerful AI models, including published safety frameworks and incident reporting, to curb risks from frontier systems.
Does SB 53 ban AI development?
No. It sets safeguards and disclosure requirements, not prohibitions. Think seatbelts, not roadblocks.
Will other states pass AI safety laws like SB 53?
Likely yes. Colorado already has an AI law on the books, and with continued federal inaction, expect a wave from blue and red states alike through 2026.