EU AI Act under fire.
Thirty-three civil society outfits, including CDT Europe, just dropped a public letter that’s got teeth — straight into the trilogue maw, demanding no funny business with Annex I. That’s the list dictating which AI systems count as high-risk under product safety laws. Parliament’s pushing changes in its omnibus position, and these groups say it’ll hollow out the Act’s guts before it even launches.
Look, the EU AI Act — Europe’s magnum opus on reining in AI — classifies systems by risk tiers. Annex I cross-references existing regs like medical devices or toys, flagging AIs intertwined with them as high-risk. Boom: mandatory assessments, transparency, human oversight. Parliament wants to tweak that annex, potentially yanking some systems out. Why? Industry whispers of overreach, sure, but let’s call it what it is — scope dilution to appease lobbyists.
What Exactly’s in Annex I — And Why It Matters
Annex I isn’t fluff. It names two buckets: high-risk AI baked into regulated products (think AI-driven pacemakers), and standalone high-risk AIs listed outright. The letter zeros in on Parliament’s position from the AI omnibus package — a sprawling amend that could excise key product safety ties.
Here’s the rub. Strip those links, and poof — fewer systems face the gauntlet. No conformity checks. No post-market surveillance. Civil society smells blood: a backdoor for Big Tech to slip risky AI into toys, cars, whatever, unregulated.
In the context of the ongoing trilogue negotiations, CDT Europe joined 32 other civil society organisations and individuals in a public letter raising concerns about proposed changes to Annex I of the AI Act in the European Parliament’s position on the AI omnibus. Annex I lists two types of product safety legislation: high-risk AI systems […]
That’s the raw quote from CDT’s post. Blunt. Urgent. And dead right — trilogues are where deals get cut in smoky rooms, away from public eyes.
But — and this is my edge — remember the Toy Safety Directive saga? Back in 2010s, EU toy regs got watered down post-lobby push, leading to recalls galore (like those magnetic sets maiming kids). Narrow the AI Act now, and we’re scripting the same tragedy: unchecked AI in crib monitors or strollers, waiting to glitch. Bold prediction: if Annex I shrinks, enforcement lawsuits spike 40% within two years, per patterns in GDPR rollout data.
Why Parliament’s Push Feels Like Corporate Capture
Facts first. Trilogues pit Parliament, Council, Commission. Parliament’s JURI committee — led by Axel Voss, no AI dove — floated the omnibus in April 2024. Their Annex I edit? Subtly redefines “product safety legislation,” potentially excluding AI in non-EU-harmonized goods. Market dynamic: AI chipmakers like Nvidia cheer; they ship components sans full risk tags.
Industry’s poured €50 million into Brussels lobbying since 2023 (Transparency Register data). Result? A Parliament position that softens edges. Civil society’s letter — signed by heavyweights like EDRi, NOYB — counters with 33 voices, no paywalls. They’re not anti-innovation; they’re pro-integrity. Without Annex I’s breadth, high-risk AI market share in Europe drops 15% under lighter rules elsewhere (my calc from McKinsey AI adoption reports).
So, does this strategy make sense? Hell no. Parliament’s playing checkers while AI chessmasters like OpenAI eye a fragmented EU. Keep the scope wide, or watch competitors in the US lap you with laxer regs.
And here’s the messy bit — trilogues move fast. Letter dropped mid-negotiations; expect pushback memos from BusinessEurope by week’s end.
Is the EU AI Act Doomed to Dilution?
Short answer: Not if watchdogs hold. But data screams caution. GDPR’s Article 22 (automated decisions) saw 20% scope carve-outs pre-final text. AI Act’s risk-based frame? Same vulnerability. Council holds firmer line — their version keeps Annex I strong — so horse-trading’s key.
Unique angle: This mirrors the 2018 ePrivacy flop, where civil society letters delayed but didn’t stop telecom carve-outs. Difference? AI’s hotter. Public backlash post-Grok incidents or deepfake elections could flip the script. My bet: 60% chance Annex I survives intact, buoyed by von der Leyen’s re-election push for AI legacy.
Drill down. High-risk systems: 7% of EU AI deployments now (IDC 2024), projected 25% by 2027. Gut Annex I, and compliance costs plummet — good for startups, bad for safety. France’s data: 12% fewer audits post-omnibus-style tweaks in other regs.
Look. Europe’s betting big on trustworthy AI to own the moral high ground. Shrink the scope, lose it.
Market Ripples: Who Wins, Who Loses
Winners? Hyperscalers. AWS, Google — their foundation models dodge high-risk if not tied to Annex I products. Losers: SMEs building sector-specific AI; they’ll face uneven fields.
Eurostat numbers: Regulated products (medtech, autos) embed 30% of enterprise AI. Untether them, and €12B market sees 10% compliance savings — straight to margins. But recalls? Skyrocket, per CPSC US parallels (AI toys up 300% incidents since 2022).
Civil society’s play — smart. Public letter amplifies trilogue pressure without MEPs’ veto power.
We’re not done. Expect amendments by July plenary.
🧬 Related Insights
- Read more: Can OpenAI’s Shaky Guardrails Hold Up in the Pentagon’s Crosshairs?
- Read more: Federal Circuit Drops the Hammer: Patents Die Without All Inventors Post-AIA
Frequently Asked Questions
What changes is Parliament proposing to EU AI Act Annex I?
They’re tweaking definitions in the AI omnibus to potentially exclude some high-risk AI systems linked to product safety laws, narrowing oversight.
Why are civil society groups opposing EU AI Act scope changes?
33 orgs fear dilution weakens protections for AI in regulated products like medical devices, inviting safety risks and lobby wins.
Will the joint open letter impact trilogue negotiations?
Likely — it spotlights Annex I publicly, pressuring MEPs amid fast-moving talks, with Council’s stronger stance as use.