Publishers are drawing a line.
The Make it Fair campaign hit UK newsstands Tuesday, uniting national heavyweights like The Times with scrappy local rags in a blitz against AI’s voracious data hunger. Picture this: every weekly newspaper from Cornwall to the Cairngorms running the same stark message—generative AI models gobbling up journalism without a penny or a nod. The timing is pointed: the campaign lands just as a government consultation on AI and copyright wraps up. No coincidence there.
And here’s the kicker—they’re mobilizing readers to badger their MPs. Flood the inboxes, they say. Because if this “content theft”—their words—goes unchecked, trusted newsrooms crumble.
What Sparked This United Front?
Never before have UK publishers spoken with one thunderous voice. David Dinsmore, News UK’s COO, nailed it on Times Radio: “I don’t think I’ve ever seen newspapers editorialised with one voice – this is a first.” That rarity? It’s born from desperation. AI labs like OpenAI and Google have built empires on scraped web data, training models that spit out summaries rivaling original reporting. Publishers see their lifeblood—investigative scoops, eyewitness accounts—funneled into black-box behemoths for free.
But dig deeper. This isn’t knee-jerk rage. It’s architectural warfare. Generative AI thrives on scale; without vast, uncompensated troves of human creativity, those large language models sputter. Publishers argue their content isn’t just fodder—it’s the scaffolding. Strip away payment, and who funds the next Watergate?
Owen Meredith, News Media Association CEO, cuts through the noise:
“We already have gold-standard copyright laws in the UK. They have underpinned growth and job creation in the creative economy across the UK – supporting some of the world’s greatest creators – artists, authors, journalists, scriptwriters, singers and songwriters to name but a few.
“And for a healthy democratic society, copyright is fundamental to publishers’ ability to invest in trusted quality journalism.”
Spot on. Yet the government’s consultation—kicked off December 17—flirts with easing access for AI devs to “high-quality material.” Translation: more scraping, less scrutiny.
Why Now? The Consultation Trap
Look, the timing reeks of strategy. As responses pour in, publishers amplify the chorus: don’t gut laws that work. Meredith again: “The only thing that needs affirming is that these laws also apply to AI, and transparency requirements should be introduced to allow creators to understand when their content is being used. Instead, the government proposes to weaken the law and essentially make it legal to steal content.”
Weakening? That’s the fear. The consultation weighs balancing creative industries against AI innovation. But publishers smell a tilt—prioritizing tech titans over homegrown talent. It’s Napster 2.0, only instead of Metallica suing a file-sharing service, it’s tabloids suing algorithms. (Remember 2000? Labels litigated file-sharing into licensed distribution; here, publishers eye a similar pivot, forcing AI firms to license datasets.)
My unique angle: this foreshadows a transatlantic split. While US courts are still weighing whether AI training counts as fair use (the New York Times suit against OpenAI remains unresolved), UK’s stricter regime could birth Europe’s first AI content tax. Bold prediction—watch France’s music push bleed into news; by 2026, mandatory opt-ins reshape model training.
A single stark ad. Then pages of persuasion.
How AI Scraping Actually Works (And Why It Hurts)
So, how does this theft happen? Crawlers—think Common Crawl on steroids—vacuum the web, ingesting billions of pages. Some ignore robots.txt entirely; others hide behind a “fair use” dodge. Models learn patterns: syntax from editorials, facts from beats. Output? A “summary” that nixes clicks back to the source.
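To make the robots.txt point concrete, here’s a minimal sketch of how a well-behaved crawler checks a site’s rules before fetching a page, using Python’s standard library. The user agents, paths, and rules below are entirely hypothetical; the key takeaway is that robots.txt is an honour system—nothing in the protocol stops a crawler from skipping this check.

```python
# Sketch: how a polite crawler consults robots.txt (hypothetical rules).
# Honouring these rules is voluntary -- a scraper can simply not do this.
from urllib.robotparser import RobotFileParser

# A hypothetical news site blocks an AI training crawler outright,
# while letting other bots read everything except paywalled pages.
robots_txt = """\
User-agent: HypotheticalAIBot
Disallow: /

User-agent: *
Disallow: /subscriber-only/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The AI crawler is barred from the whole site.
print(parser.can_fetch("HypotheticalAIBot", "/news/scoop"))   # False
# Other agents may fetch public pages but not paywalled ones.
print(parser.can_fetch("GenericBot", "/news/scoop"))          # True
print(parser.can_fetch("GenericBot", "/subscriber-only/a"))   # False
```

The asymmetry publishers complain about lives in that gap: the rules are published, but compliance depends entirely on the crawler choosing to run a check like this.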
Publishers lose twice. Traffic evaporates—why read when ChatGPT condenses? Ad revenue? Poof. And the existential bit: if AI apes journalism well enough, why pay reporters? It’s not hype; data backs it. Similar pushes in Australia forced Google to pay; UK wants in.
But here’s the rub—government whispers of “text and data mining exceptions.” Echoes EU’s 2019 carve-out, now under fire. Publishers counter: gold-standard laws built the BBC, nurtured Rowling. Don’t trade that for Silicon Valley scraps.
They’re not alone. 1,000 musicians dropped a “joint album” of empty studios—symbolic silence for silenced creators. Cross-industry fury.
Will Make it Fair Sway MPs?
Short answer: maybe. Public pressure works—Brexit proved it. With week-long saturation, MPs face constituent heat. Yet the AI lobby is deep-pocketed. DeepMind’s UK roots, Cambridge clusters—innovation hubs plead for data freedom.
Critique the spin: government’s “support both sides” rhetoric masks favoritism. Creative economy? 5% GDP. AI? Future unicorn factory. But at what cost? If publishers win transparency—opt-outs, royalties—it forces architectural shifts. Models train narrower, perhaps ethically sourced. Or fragment: paywalled data gardens bloom.
This campaign’s genius? Grassroots via newsprint. Digital ads fade; local rags endure trust. It’s analog rebellion in a pixel world.
Devastating, if unchecked.
Frequently Asked Questions
What is the Make it Fair campaign?
A unified UK publishers’ push against AI scraping content without permission or payment, running ads and urging public letters to MPs.
Does UK law allow AI to use news content for free?
Current copyright is strong, but the government consultation eyes exceptions for AI training—publishers say no way.
Will AI companies have to pay publishers like in Australia?
Possibly; this campaign aims to force licensing deals, mirroring deals that netted Aussie media $100M+ annually from Google.