Anthropic Billing Issue: Claude Ghost Charges

One Claude Max subscriber sails blissfully unaware—then bam, $180 in phantom charges. Anthropic's response? An AI bot, then silence for a month. Sound familiar?

Anthropic's $180 Ghost Charges: Claude Users Stuck in Billing Hell, Support Vanishes — theAIcatchup

Key Takeaways

  • Claude Max users face $180+ in ghost charges from meter glitches, despite zero usage.
  • Anthropic support: AI bot only, with humans ghosting users for 30+ days.
  • Unique risk: Erodes 'safe AI' trust, potential user exodus to rivals.

Picture the scene: a Claude Max subscriber staring at a credit card alert in early March, sailboat rocking gently off San Diego. $180. To Anthropic. For Claude usage? He'd been offline, parents in tow, no laptop in sight.

This wasn’t some rogue API call. Nope. Sixteen ‘Extra Usage’ invoices, $10 to $13 each, crammed into March 3-5. Usage dashboard? Stuck at 100%. Session history? Two blips on the 5th, under 7KB total. Zilch on the 3rd or 4th.
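If you suspect the same thing, one sanity check is to line up invoice timestamps against your session history. A minimal sketch, assuming you've copied both into simple records by hand; the field names and values here are illustrative, not Anthropic's actual export format:

```python
from datetime import datetime, timedelta

# Hypothetical records reconstructed from the complaint: invoice timestamps
# and session-history entries. Illustrative only, not Anthropic's schema.
invoices = [
    {"ts": "2025-03-03T09:14:00", "amount": 12.40},
    {"ts": "2025-03-04T11:02:00", "amount": 10.15},
    {"ts": "2025-03-05T16:45:00", "amount": 11.30},
]
sessions = [
    {"ts": "2025-03-05T16:40:00", "bytes": 3100},
    {"ts": "2025-03-05T18:05:00", "bytes": 3600},
]

def phantom_invoices(invoices, sessions, window_hours=2):
    """Flag invoices with no session activity within +/- window_hours."""
    window = timedelta(hours=window_hours)
    flagged = []
    for inv in invoices:
        inv_ts = datetime.fromisoformat(inv["ts"])
        matched = any(
            abs(inv_ts - datetime.fromisoformat(s["ts"])) <= window
            for s in sessions
        )
        if not matched:
            flagged.append(inv)
    return flagged

suspect = phantom_invoices(invoices, sessions)
print(f"{len(suspect)} of {len(invoices)} invoices have no nearby session")
# → 2 of 3 invoices have no nearby session
```

With the data above, the March 3 and March 4 invoices get flagged: money charged, nothing in the log. That's exactly the evidence worth attaching to a refund request.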

Anthropic billing issue strikes again. You’re not alone if you’re a Claude Max subscriber pulling your hair out over this. GitHub’s lit up—claude-code#29289, #24727—and Reddit’s r/ClaudeCode echoes the nightmare: meters lying through their digital teeth, charges stacking like bad karma.

Why the Hell Are Claude Users Eating $180 Phantom Bills?

Look, I’ve covered enough SaaS debacles in 20 years to spot a classic. Remember AWS’s early billing overruns? Or Stripe’s phantom charges circa 2015? Companies scale fast, meters glitch, and suddenly you’re funding their next round. But Anthropic? They’re the ‘responsible AI’ crew, Claude’s supposed to be the safe bet against OpenAI’s Wild West.

Here’s the user zeroing in: “Between March 3-5, I received 16 separate ‘Extra Usage’ invoices ranging from $10-$13 each, all in quick succession of one another. However, I wasn’t using Claude. I was away from my laptop entirely and was out sailing with my parents back home in San Diego.”

That’s from the original complaint. Punchy proof. No activity, yet the meter spins like a Vegas slot machine. My unique take? This reeks of background daemon leaks—Claude Code sessions idling in some server farm, racking tokens while your machine sleeps. We’ve seen it in early GPT wrappers. Anthropic’s PR spins ‘constitutional AI,’ but their infra’s still beta-rigged for enterprise, not plebs like us Max plan folks.
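To be clear, the daemon-leak theory is my speculation, and a server-side leak wouldn't show up on your machine at all. But you can at least rule out the client side by checking for Claude Code processes still running locally. A rough POSIX-only sketch; the string match is naive and catches any command mentioning 'claude':

```python
import subprocess

def lingering_claude_processes():
    """Return ps lines for processes whose command mentions 'claude'.

    A coarse client-side check for idle Claude Code sessions left
    running; it cannot see anything idling on Anthropic's servers.
    """
    out = subprocess.run(
        ["ps", "-eo", "pid,etime,command"],  # pid, elapsed time, command
        capture_output=True, text=True,
    ).stdout
    return [line.strip() for line in out.splitlines()[1:]  # skip header
            if "claude" in line.lower()]

for proc in lingering_claude_processes():
    print(proc)  # anything long-running here is worth killing before bed
```

If that list is empty and the meter still spins, the problem is on their side, not yours.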

And who’s cashing in? Not you. Anthropic’s valuation just hit $18B on Amazon’s dime. These glitches? Free money till users revolt.

But wait—it’s worse.

Is Anthropic’s ‘Fin AI Agent’ Support Just a Fancy Dead End?

March 7th. The user’s email drops, detailed as a forensic report. Evidence attached. Boom—reply in two minutes. From Fin AI Agent. ‘File an in-app refund,’ it chirps. Problem? That flow’s subscription-only. Extra Usage charges? Not covered. Plus, he wants answers, not a band-aid.

He pushes for a human. Gets this gem:

Thank you for reaching out to Anthropic Support. We’ve received your request for assistance. While we review your request, you can visit our Help Center and API documentation for self-service troubleshooting. A member of our team will be with you as soon as we can.

March 7. Crickets. Follow-up March 17. Silence. March 25. Nada. April 8—over a month. Still ghosted.

Irony so thick you could cut it with a prompt. Anthropic builds Claude, world’s ‘most capable’ assistant. Their support? An AI wall, no humans in sight. I get AI triage—fine. But AI-only? That’s a moat, not service.

I’ve grilled execs from Salesforce to Slack on this. Scale kills empathy. Anthropic’s betting on volume over velvet gloves. Prediction: as Claude Pro/Max swells, expect a support union or mass exodus to Grok. History says so—look at Unity’s runtime fee fiasco. Users bolt when billing bites.

Dig deeper, though. Forums buzz with patterns. One Redditor: identical 100% meter, no sessions. Another: charges during outages. Anthropic’s dashboard? Opaque as a black box. No granular logs for Max users. Enterprise gets ‘em—why not us?

Cynical me smells cost-cutting. Humans cost $80k/year. AI? Pennies. But when Fin fails, trust evaporates. Claude’s edge was reliability. This? Cracks the pedestal.

What Happens When AI Companies’ Own AI Fails?

Zoom out. Anthropic preaches safety, alignment. Claude won’t hallucinate your grandma’s recipes. But billing? Hallucinating $180? That’s not cute.

Broader ripple: devs on tight budgets. One rogue weekend? Budget blown. Trust erodes, prompts shift to competitors. I’ve seen it—Hugging Face poached loads post-OpenAI rate hikes.
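Until Anthropic ships better guardrails, budget-conscious devs can bolt a cap onto their own side of the wire. A minimal client-side sketch; the per-token prices are placeholders, not Anthropic's actual rates:

```python
# Illustrative client-side spend guard: estimate cost per request from
# token counts and refuse to proceed once a monthly cap would be hit.
PRICE_PER_1K_INPUT = 0.003   # hypothetical $/1K input tokens
PRICE_PER_1K_OUTPUT = 0.015  # hypothetical $/1K output tokens

class SpendGuard:
    def __init__(self, monthly_cap_usd):
        self.cap = monthly_cap_usd
        self.spent = 0.0

    def record(self, input_tokens, output_tokens):
        """Add one request's estimated cost; raise before exceeding the cap."""
        cost = (input_tokens / 1000) * PRICE_PER_1K_INPUT \
             + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT
        if self.spent + cost > self.cap:
            raise RuntimeError(
                f"cap ${self.cap:.2f} would be exceeded "
                f"(already spent ${self.spent:.2f})"
            )
        self.spent += cost
        return cost

guard = SpendGuard(monthly_cap_usd=1.00)
guard.record(100_000, 10_000)  # ~$0.45 at these placeholder rates
guard.record(100_000, 10_000)  # ~$0.90 total
# a third identical call raises RuntimeError instead of billing on
```

It won't stop a server-side meter glitch, but it stops your own code from quietly running up a tab.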

Anthropic, fix this. Publish root cause. (Spoiler: likely token cache bug or idle session multiplier.) Refund proactively. Hire humans—or train Fin to escalate.

Otherwise, you’re just another Valley vaporware promise. Safe AI? Prove it starts at home.

Short-term pain for users. Long-term? If ignored, Anthropic risks ‘billing bully’ rep. Bold call: expect class-action whispers by summer.

The real winners? Perplexity. xAI. Anyone with humans on support.



Frequently Asked Questions

What is the Anthropic Claude billing issue?

Claude Max users report $100+ in unexpected ‘Extra Usage’ charges despite zero activity, with dashboards showing false 100% usage and phantom sessions.

How do I get a refund for Anthropic billing errors?

Try the in-app refund flow (subscriptions only), or email support with evidence—but expect AI replies and month-long waits. No guarantees.

Is Anthropic support all AI now?

Yes, frontline is Fin AI Agent. Humans? MIA after escalations, per reports.

Written by James Kowalski

Investigative tech reporter focused on AI ethics, regulation, and societal impact.



Originally reported by Hacker News (best)
