EU AI Act GPAI Code of Practice Guide

Brussels drops its voluntary playbook for taming GPAI under the EU AI Act. But after 20 years watching Valley hype, I'm asking: does this actually stick, or is it more paperwork for the lawyers?

EU's GPAI Code of Practice: Self-Regulation Theater or Actual AI Leash? — theAIcatchup

Key Takeaways

  • Voluntary code with 2025-2027 deadlines gives Big Tech wiggle room.
  • Copyright rules target scraping but ignore past training data sins.
  • Systemic risk frameworks sound tough; enforcement's the weak link — expect consultant gold rush.

Rain-slicked streets outside the European Commission’s HQ, and inside, AI lobbyists are already lawyering up over the latest draft.

I’ve chased Silicon Valley snake oil for two decades now — from dot-com bubbles to crypto winters — and this EU AI Act Code of Practice for General Purpose AI models smells like the same old game. Providers get a ‘clear framework’ to comply, sure, but it’s all voluntary. They can pick this code or ‘other appropriate methods.’ Translation: we’ll comply on our terms, thanks.

Why the EU’s Betting on Good Faith (Again)

Look, the rules kick in August 2, 2025, for new models. Fines or model recalls? Not till 2026. Legacy stuff gets till 2027. It’s a grace period, they say — time to cozy up with the AI Office. But here’s my unique take, one you won’t find in the press release: this mirrors the GDPR rollout perfectly. Remember how everyone panicked, hired consultants, then mostly ignored it until the first €20 million fine? Expect the same. A boom in AI compliance firms, pocketing millions, while actual safety lags.

And who signs up? Big players like OpenAI or Google, committing to docs on every EU-distributed model — unless it’s free, open-source, low-risk. Store it for 10 years, hand it over on request. Public summaries encouraged. Noble, right? Except enforcement’s a year out.

Signatories commit to maintaining up-to-date, comprehensive documentation for every GPAI model distributed within the EU, except for models that are free, open-source, and pose no systemic risk.

That’s straight from the doc. Sounds airtight. Feels like vaporware.

The copyright chapter's the real minefield.

Does the Copyright Chapter Stop AI Data Vampires?

Web crawlers, meet robots.txt. No scraping paywalled sites, respect rights signals, build filters to avoid spitting out infringing output. Providers need a copyright policy and a complaint hotline for creators. All to satisfy EU copyright law — prior authorization required, unless the text-and-data-mining (TDM) exception applies.
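The crawler-side obligation boils down to: check the rights signal before you fetch. A minimal sketch using Python's standard urllib.robotparser — the bot name, domain, and robots.txt content here are hypothetical, and real compliance obviously goes well beyond honoring robots.txt:

```python
from urllib import robotparser

# Hypothetical crawler identity; a real signatory would use its own bot name.
USER_AGENT = "ExampleGPTBot"

# Sample robots.txt a publisher might serve (illustrative only):
# premium content is off-limits to the training crawler.
ROBOTS_TXT = """
User-agent: ExampleGPTBot
Disallow: /premium/
Disallow: /subscribers/

User-agent: *
Allow: /
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT)  # in production: rp.set_url("https://site/robots.txt"); rp.read()

def may_crawl(url: str) -> bool:
    """Check the publisher's rights signal before fetching a page for training data."""
    return rp.can_fetch(USER_AGENT, url)

print(may_crawl("https://news.example.eu/article/123"))    # open article: allowed
print(may_crawl("https://news.example.eu/premium/scoop"))  # paywalled path: blocked
```

The point of the sketch: the mechanics are trivial; the Code's bet is on providers actually wiring checks like this into their pipelines.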

But c’mon. These models were trained on the entire internet already. Retroactively ‘lawfully accessing’ data? That’s like telling a burglar to return the TV and ask permission next time. Providers will tweak terms of service, ban unauthorized use downstream, but the damage is baked in. And that designated contact point? Bet it’ll be a black hole for artist complaints.

Safety and security — for ‘systemic risk’ models — that’s the meat. Frameworks before release, risk IDs via scenarios and experts, analysis with adversarial tests, accept/reject decisions with safety margins. Mitigations everywhere: filters, monitoring, phased rollouts, secure servers.

Then reports. Detailed Model Reports pre-launch, updates forever. Assign blame internally — er, ‘responsibility allocation.’ Report serious incidents.

It's comprehensive on paper, weaving through lifecycle risks, pulling in external evals, forecasting nasties like jailbreaks or bias bombs — but who verifies? The AI Office, understaffed and two years from teeth. Providers self-assess, self-report. We've seen this movie; Enron had better internal controls.

Is Systemic Risk Just Code for ‘Blame the Big Models’?

Commitments 1 through 10 lay it out: build the framework, ID risks, analyze ‘em, decide if acceptable (with margins — smart, that), mitigate safety (refusals, guards), security (no leaks), report it all, allocate oversight roles.

Appendices for templates. Standardized docs on datasets, compute, energy — transparency porn.

Here’s the cynicism: this won’t touch Chinese labs or rogue open-source hackers. It’s EU-market theater, letting US giants say ‘we comply’ while raking ad dollars. Bold prediction? By 2028, first fines hit small fry, not Sam Altman. Consultants win big.

Remember aviation safety codes after the big crashes? They worked because metal bends and regulators crash-test. AI? Opaque boxes. Hard to audit.

But. Safety framework must evolve — new risks, incidents, model tweaks. Ongoing monitoring. Post-market surveillance. That’s the hook — if enforced.

Why Developers Should Care (Beyond the Fines)

Deadlines matter. New models post-2025: comply now. Old ones: 2027 grace. Non-signatories prove it ‘other ways’ — vagueness invites scrutiny.
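Those deadlines reduce to a trivial lookup. A toy sketch — note the August 2 anniversary for the 2026 and 2027 dates is my assumption, since the article only gives years, and none of this is legal advice:

```python
from datetime import date

OBLIGATIONS_START = date(2025, 8, 2)  # new GPAI models: obligations apply from here
ENFORCEMENT_START = date(2026, 8, 2)  # fines/recalls possible (2026 per the article; exact day assumed)
LEGACY_DEADLINE   = date(2027, 8, 2)  # pre-existing models: grace period (exact day assumed)

def compliance_deadline(model_release: date) -> date:
    """Return the date by which a GPAI model must comply, per the timeline above."""
    if model_release >= OBLIGATIONS_START:
        return OBLIGATIONS_START  # new model: obligations already in force at release
    return LEGACY_DEADLINE        # legacy model: grace until 2027

print(compliance_deadline(date(2026, 1, 15)))  # new model
print(compliance_deadline(date(2024, 5, 1)))   # legacy model
```

Two dates, one branch — which is exactly why "we need more time" won't be a credible excuse.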

Two-word warning: Lawyers feast.

For downstream users, docs mean better integration — or lawsuits if you ignore warnings.

Cynical close: it's progress, barely. Strips some PR spin, forces paperwork. But follow the money, and it flows to compliance vendors, not safer streets.



Frequently Asked Questions

What is the EU AI Act Code of Practice for GPAI?

It’s a voluntary framework — with templates — helping AI model makers show they’re following EU rules on transparency, copyright, and risks. Starts 2025, voluntary but smart to follow.

When must GPAI models comply with EU AI Act?

New models from August 2, 2025; existing by 2027. Enforcement from 2026.

Do open source AI models need to follow this code?

No, if free, open, and low systemic risk — but docs encouraged for trust.

Do I need to sign the GPAI Code of Practice?

Nah, voluntary — prove compliance your way. But it’s the easy path.

James Kowalski
Written by

Investigative tech reporter focused on AI ethics, regulation, and societal impact.



Originally reported by EU AI Act News
