Overloaded.
That’s the EU AI Act’s governance in a nutshell — or should I say, a never-ending spreadsheet.
Claudio Novelli and his brain trust — Philipp Hacker, Jessica Morley, Jarle Trondal, Luciano Floridi — drop a no-nonsense breakdown of how the Commission should (finally) make this thing work. Their paper, fresh on SSRN, sticks to the script: explain the framework, serve up recommendations for uniform enforcement. No fluff. Policymakers, take notes — or don’t, and watch chaos unfold.
Look, the AI Act’s been law since 2024. But implementation? Still crawling. The Commission’s got a laundry list of tasks, from comitology dances with Member States to churning out guidelines on everything under the sun. Delegated acts. Implementing acts. AI Office wrangling. It’s like they handed Brussels a Swiss Army knife with half the blades missing.
Commission’s Mountain of Mandates
Start with procedures. Commission teams up with the shiny new AI Office and AI Board to birth those acts. Comitology? Yep, Member States get their say. Then EP and Council scrutiny. Rinse, repeat.
Guidelines next — defining what counts as an AI system, high-risk classifiers, risk assessments. Oh, and rules for “significant modifications.” Fine-tuning a foundation model? Probably safe, say Novelli et al., unless you’re stripping safety rails like a reckless teen hot-rodding dad’s car.
Classification hits hard. Updating Annex III for high-risk use cases. Slapping “systemic risk” on GPAI models based on FLOPs or capabilities. Dynamic tweaks to thresholds? Essential, they argue, as AI shrinks in compute but swells in power.
Prohibited systems get guidelines too — manipulative tricks, biometric bans (with law enforcement loopholes). Harmonized standards for high-risk providers’ risk management. Technical docs. Codes of practice.
Transparency mandates across the value chain. Enforcement interplay with other EU laws. Sandboxes. Penalties that actually bite.
It’s exhaustive. Exhausting.
Within the framework of risk assessments, the Commission has yet to define rules on “significant modifications” that alter a system’s risk level after it has been placed on the market (Art 43(4) AIA). Novelli et al. expect that standard fine-tuning of foundation models should not count as a substantial modification, unless the process explicitly involves removing safety layers or taking other actions that clearly increase risk.
That’s their gem on mods. Smart call — borrow from medicine’s change management plans. Outline tweaks upfront, assess impacts. Prevents marketplace mayhem.
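For the logically inclined, the Novelli et al. heuristic boils down to a short decision rule. A minimal sketch — the class, field names, and conservative default below are my own paraphrase for illustration, not an official AIA test:

```python
from dataclasses import dataclass

@dataclass
class Modification:
    is_fine_tuning: bool          # standard fine-tuning of a foundation model
    removes_safety_layers: bool   # e.g. stripping refusal/guardrail training
    clearly_increases_risk: bool  # any other change that raises the risk level

def is_substantial_modification(mod: Modification) -> bool:
    """Heuristic per Novelli et al.: fine-tuning alone is presumed
    not substantial, unless it removes safety measures or otherwise
    clearly increases risk."""
    if mod.removes_safety_layers or mod.clearly_increases_risk:
        return True
    if mod.is_fine_tuning:
        return False  # standard fine-tuning: presumed not substantial
    return True  # anything else: treat as substantial until reviewed

# Plain-vanilla fine-tune: no re-certification triggered.
print(is_substantial_modification(Modification(True, False, False)))
# Fine-tune that strips the guardrails: back through conformity assessment.
print(is_substantial_modification(Modification(True, True, False)))
```

The conservative default for non-fine-tuning changes is an assumption on my part; the paper only addresses the fine-tuning case directly.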
Will GPAI Classification Ignite Courtroom Fires?
Here’s the powder keg: General-Purpose AI (GPAI). Commission holds the classify-or-not hammer under Art 51. Systemic risk? Triggers red-teaming, full risk audits, incident reports, cyber armor (Art 55). Scientific Panel can flag; Commission decides.
But contest rights? Art 52 lets providers fight back. Novelli et al. predict battle royale. Models under 10^25 FLOPs deemed systemic? Hello, Court of Justice of the EU. Big Tech’s lobbyists are salivating.
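The presumption at stake is almost embarrassingly simple to state in code. A sketch, assuming training compute is known; the 10^25 FLOP figure is the Art 51 AIA presumption, the function and parameter names are mine:

```python
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # Art 51 AIA presumption

def presumed_systemic_risk(training_flops: float,
                           threshold: float = SYSTEMIC_RISK_FLOP_THRESHOLD) -> bool:
    """Presumption only: the Commission can also designate models
    below the threshold on capability grounds, and providers can
    contest the designation."""
    return training_flops >= threshold

# A leaner model trained with ~4e24 FLOPs sits below the presumption —
# exactly the contested zone the paper flags.
print(presumed_systemic_risk(4e24))  # below threshold
print(presumed_systemic_risk(3e25))  # presumed systemic risk
```

Making `threshold` a parameter is the whole fight in miniature: the paper wants the Commission to adjust it dynamically as models shrink in compute, and every adjustment is a fresh invitation to litigate.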
And those dynamic parameters — adjusting FLOP thresholds and benchmarks as models get leaner? Vital for robustness, per the paper. But it’ll be whack-a-mole regulation. Remember GDPR’s early days? Fines pinged startups while Google shrugged. History rhymes: small fry comply, giants litigate.
My twist? This isn’t just oversight. It’s a prediction: by 2026, we’ll see the first CJEU showdown over a “small” GPAI like an optimized Llama variant. Commission’s agility sounds noble — until it morphs into arbitrary rulemaking. Providers will cry foul, Member States bicker, AI Office drowns in appeals.
AI Office: Savior or Paper-Pusher?
Enter the AI Office — central hub for coordination. With the AI Board (Member State reps, experts), it’s meant to enforce uniformity. Novelli pushes for clear enforcement rules, sandbox standards, penalty harmony.
Skeptical me sees red flags. The Act’s vague on Office powers. Will it muscle Big Tech or wilt under pressure? Past EU efforts — think DSA enforcement stumbles — suggest it’ll prioritize optics over teeth.
Enforcement’s the kicker. Clarify AIA overlaps with GDPR, DSA. Set supervisory sandboxes. Ensure penalties deter, not just decorate.
But here’s the rub: Member States set fines. Effective? Proportionate? We’ll see — or not, if national politics hijack it.
The paper nails it: uniform execution demands muscle. Yet, without teeth, it’s theater.
And prohibited AI? Guidelines on manipulative practices, real-time biometrics exceptions for cops. Standards to counter hazards. Necessary, but ripe for loopholes — national security trumps all, every time.
High-risk obligations? Harmonized standards, Article 9 risk-management systems, Annex IV technical documentation. Approving codes of practice. Providers throughout the value chain disclose.
It’s a blueprint. Flawed one.
Why This Matters — Or Doesn’t
Novelli et al. don’t hype. They reiterate: AI Act needs coordinated push. But my acerbic take? Europe’s risk-based approach is clever cover for doing too little, too late. US charges ahead with models; China builds unchecked. EU? Debates FLOPs while innovation flees to friendlier shores.
Unique angle: parallel to Y2K hype. Remember the panic, trillions prepped, then… crickets? AI Act’s governance frenzy risks the same — overprep for yesterday’s threats, underguard tomorrow’s.
Bold call: If Commission fumbles GPAI thresholds, expect a 2027 regulatory reset. Providers bolt; enforcement crumbles.
Dry humor aside, policymakers ignore this at peril. Novelli’s map is gold — if anyone’s listening.
🧬 Related Insights
- Read more: EU AI Act Slams Staffing Firms: Deployers, Not Just Vendors, on the Hook
- Read more: What If ‘Guilty’ Verdicts Upset Criminals? UK’s Wild Satire Exposes Justice’s Soft Underbelly
Frequently Asked Questions
What tasks does the Commission have for EU AI Act implementation?
Churning out delegated and implementing acts, guidelines, and classifications — from high-risk Annex III updates to GPAI systemic-risk calls and prohibited-AI rules.
How will GPAI models be classified under the AI Act?
Commission decides on systemic risk via FLOPs, capabilities; adjustable params, contestable in court.
What are significant modifications in AI Act risk assessment?
Changes upping risk levels, like stripping safety; standard fine-tuning usually exempt, per Novelli et al.