AI Chatbots Influencing Government Decisions

Imagine tariffs cooked up by a chatbot. That's allegedly what happened in Trump's 2025 trade war — and it's just the start of AI's grip on government.

ChatGPT's Shadow: How AI Shaped Trump's Bizarre Tariffs — The AI Catchup

Key Takeaways

  • AI chatbots like ChatGPT may have directly shaped Trump's 2025 tariffs via simplistic formulas.
  • Officials in Germany, Albania, and the UK use AI daily for policy brainstorming, often without safeguards.
  • Risks include biases, hallucinations, and opacity — demanding urgent transparency rules.

What if the next big policy blunder came straight from a chatbot prompt?

AI chatbots influencing government decisions isn’t sci-fi. It’s happening now, with whispers of ChatGPT formulas behind Trump’s April 2025 tariffs — those head-scratching levies slapping poor nations that barely trade with us.

Reverse-engineers spotted it fast. Plug in ‘solve trade deficits,’ and boom: out pops a simplistic equation matching the administration’s numbers. Warnings about risks? Ignored. Applied anyway.

The claim is unproven, sure. But it spotlights the peril. Governments crave quick fixes. Chatbots deliver. Too easily.

Did Trump’s Tariffs Echo ChatGPT’s Advice?

Picture this: officials, buried in data, fire off a query. ‘Fix our trade gap.’ AI spits back a formula — basic, risky, but authoritative-sounding. Trump’s team runs with it, hitting countries like Cambodia with 60% duties despite their tiny imports.

Economists howled. The numbers didn’t add up. Until someone tested LLMs. Gemini, Claude, GPT-4o — all converged on similar math. Trade deficit divided by imports, multiplied by 100 for a ‘tariff rate.’ Nonsensical for poor exporters. But there it was, mirrored in policy.
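That arithmetic is simple enough to sketch. A minimal illustration of the reverse-engineered formula follows, using made-up trade figures (the function name and all numbers are illustrative, not actual trade data or the administration's calculation):

```python
# Sketch of the reverse-engineered "tariff" arithmetic:
#   tariff rate = (bilateral trade deficit / imports from that country) * 100
# All figures below are invented for demonstration only.

def naive_tariff_rate(trade_deficit: float, imports: float) -> float:
    """Return the simplistic 'tariff rate' (in percent) the chatbots proposed."""
    return trade_deficit / imports * 100

# A small, poor exporter: $5bn imported from it, only $2bn exported back.
deficit = 5.0 - 2.0                 # $3bn bilateral deficit
rate = naive_tariff_rate(deficit, 5.0)
print(f"{rate:.0f}%")               # prints "60%"
```

Note what the formula ignores: it punishes any country that sells more to the US than it buys, which for a low-income exporter is almost guaranteed, regardless of tariff barriers or purchasing power.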

“Various observers reverse-engineered a formula that would produce these numbers, and it quickly became apparent that this formula was the answer that multiple chatbots give to the question of how to ‘solve’ trade deficits. Despite warnings from the chatbots that the formula was extremely simplistic and came with numerous risks, it seems that the answers were taken and applied by the Trump administration as their official tariffs.”

That’s from AlgorithmWatch’s deep dive. Chilling, right? Not ironclad proof — White House denies it — but the pattern fits a world where leaders lean on AI for ‘insights.’

And it’s not alone.

Albania’s got Diella, an “AI Minister” for public procurement. Sounds efficient. Until you learn her face was swiped from an actress who is now suing. High-profile. Visible. But what about the shadows?

German Digital Minister Karsten Wildberger? Admits to 1-2 hours daily with chatbots, ‘structuring his thinking.’ His ministry, though? Claims zero official use. FOI request stonewalled.

Lithuania’s public broadcaster found ministers doing the same. Brainstorming. Summarizing. All casual. All unchecked.

Here’s the thing — these aren’t rogue acts. Politics runs on deadlines, overload, scarce expertise. Why grind through reports when Grok summarizes in seconds?

But chatbots curate. They prioritize ‘consensus’ views from training data — often US-centric, corporate-tilted. Phrase wrong? Bias amplifies. Fine-tuning hides agendas. In government? Compounded errors become law.

Can AI Chatbots Hijack Democracy?

Short answer: yes, subtly. Democratic decision-makers — ministers, advisors — wield power. They ask: ‘Summarize climate policy.’ AI decides what’s ‘key’ — maybe downplaying fossil fuel lobbies if data skews that way.

Or brainstorming: ‘New welfare ideas.’ Outputs neoliberal tweaks over bold reforms. Invisible nudge.

AlgorithmWatch zeroed in on Germany, Switzerland, and the UK. FOIs, published docs, experiments. Findings? Pervasive but opaque use. No safeguards. Guidelines? Toothless.

Wildberger’s case screams it. Personal tool? Or policy shaper? Boundaries blur.

My take — and here’s the unique angle you won’t find in their report: this mirrors the Oracle of Delphi. Ancient Greeks sought her cryptic advice for wars, laws. Kings listened, blindly. Biases? Priestesses favored allies. Outcomes? Disaster half the time. Today’s chatbots? Modern oracles, trained on internet sludge, whispering to presidents. History says: vet the source, or pay dearly.

Governments aren’t pausing. UK champions AI in civil service. Switzerland tests LLMs for admin. Germany? Wildberger pushes ahead.

Risks stack. Hallucinations in high-stakes briefs. Echo chambers reinforcing bad ideas. Foreign influence if models pull from adversarial data.

Experiments back it. Prompt a bot on ‘EU migration policy’ — get skewed summaries praising strict controls, soft on integration costs. Flip phrasing? Balance shifts.

Officials untrained. Overconfident. One Lithuanian study: ministers love the speed, ignore limits.

Bold prediction: by 2027, a major blunder — say, bot-fueled regulation tanking markets — forces AI literacy mandates. Think driver’s ed for ministers. Fines for unvetted use. Europe’s ahead; US lags, Trump-style.

But hype aside — Trump’s tariffs weren’t pure AI fever dream. Politics gonna politics. Still, denying the influence? Corporate spin, like Big Tech’s ‘tools, not deciders.’ Call it out: they’re oracles with profit motives.

Why Does Government AI Reliance Matter for Markets?

Markets hate surprises. AI-warped policy? Volatility spikes. Tariffs from a prompt? Supply chains scramble. Investors flee.

Data point: post-tariff announcement, S&P dipped 2%. Apparel stocks cratered 15%. If chatbots scale, expect whiplash norms.

Regulators stir. EU AI Act eyes high-risk uses — government qualifies. US? EO on AI safety, but toothless. Watchdogs like AlgorithmWatch push guidelines: log prompts, human review, audits.

Feasible? In cash-strapped bureaucracies? Doubt it without mandates.

Unique insight redux: parallels early spreadsheets. 1980s accountants botched formulas, crashing firms. Governments adapted with training. AI’s worse — opaque black boxes. History demands more.

Steps forward. Transparency laws — expand FOIs to AI logs. Banned models for policy. Third-party audits.

AlgorithmWatch’s prelim work inspires. But scale it. Partner with parliaments. Test red-team prompts.

Ignore at peril. Democracy’s not debugged yet.




Frequently Asked Questions

Do government officials really use AI chatbots daily?

Yes — German ministers log hours; UK pushes it hard. But ‘official’ use? Often denied, per FOIs.

Did ChatGPT influence Trump’s 2025 tariffs?

Strong circumstantial evidence: formulas match bot outputs. Unproven, but patterns scream influence.

What safeguards stop AI from shaping policy?

Few now. Calls for logs, reviews, training — but enforcement lags.

Written by Sarah Chen

AI research editor covering LLMs, benchmarks, and the race between frontier labs. Previously at MIT CSAIL.



Originally reported by Algorithm Watch
