Fraud Detection 2026: AI, Deepfakes & Dynamic Auth

Two thousand payments leaders gathered in Vegas this week to confront an uncomfortable truth: fraud has become faster, smarter, and harder to catch. The strategies that once worked are already obsolete.


Key Takeaways

  • Dynamic authentication based on user behavior reduces fraud 30%+ while improving conversion by eliminating friction for trusted customers
  • Agentic commerce requires fraud detection embedded in payment infrastructure itself, not evaluated after transactions—traditional rules-based systems can't adapt fast enough
  • Deepfakes and synthetic identities are now trivial to create; effective defense requires layered anomaly detection across multiple verification sources simultaneously


Traditional fraud detection relied on simple rules. If something looked weird, block it. Problem solved. Except it wasn’t. Because for every rule you write, a fraud ring figures out how to circumvent it. And the real cost—the legitimate transactions you accidentally reject—compounds quietly until your churn metrics start screaming.
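That brittleness is easy to see in miniature. A static rule of the kind described above might look like this (thresholds and field names are illustrative, not any real system's logic):

```python
# A minimal sketch of the old rules-based approach (illustrative thresholds).
# A hard-coded condition blocks everything that matches -- including
# legitimate orders, which is the hidden false-positive cost.

def static_rule_block(txn: dict) -> bool:
    """Return True if the transaction should be blocked."""
    # Rule: block any order over $500 shipping to a country the
    # customer has never ordered from before.
    return txn["amount"] > 500 and txn["ship_country"] not in txn["known_countries"]

# A fraudster who keeps the order under the threshold sails through...
print(static_rule_block({"amount": 499, "ship_country": "XX", "known_countries": []}))   # False
# ...while a real customer shopping on vacation gets declined.
print(static_rule_block({"amount": 800, "ship_country": "FR", "known_countries": ["US"]}))  # True
```

Once fraudsters learn the threshold, the rule catches mostly legitimate customers.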

What emerged from the Merchant Risk Council conference wasn’t a silver bullet. It was something more pragmatic: three distinct shifts in how the smartest fintech companies are rethinking their entire approach to payments security in an age of AI agents, deepfakes, and automated bad actors.

Why One-Size-Fits-All Fraud Rules Are Dying

Airbnb’s Roberta Del Monte Radford made a deceptively simple observation during her Vegas session: “If we have high-trust velocity, why would we put that entity through friction?”

Sound obvious? It shouldn’t be. Most payment systems treat every transaction the same. Every user, every order, every channel gets the same authentication gauntlet. It’s fair. It’s uniform. And it’s hemorrhaging money.

“We’ll reserve the friction package to the 1% of the traffic that actually is proven to be risky.” — Roberta Del Monte Radford, Airbnb

That 1% distinction matters. A false positive—a declined transaction from a real customer—doesn’t just lose you this sale. It erodes lifetime value. It triggers support tickets. It kills word-of-mouth. The actual cost of friction is almost never calculated into fraud budgets, but it should be.

What’s replacing the blunt instrument? Dynamic authentication based on behavioral profiling. Systems that build a historical picture of each user—their spending patterns, geography, device fingerprints, velocity—and use that data to make split-second decisions about when to ask for additional verification and when to let the transaction sail through.
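In spirit, that decision logic can be sketched as a behavioral risk score that gates step-up verification. The profile fields, weights, and threshold below are illustrative assumptions, not any vendor's actual model:

```python
# A hedged sketch of dynamic authentication: score each transaction against
# the user's historical profile and only step up verification when the score
# crosses a threshold. Weights and field names are illustrative.

def risk_score(profile: dict, txn: dict) -> float:
    score = 0.0
    if txn["device_id"] not in profile["known_devices"]:
        score += 0.4                                   # unfamiliar device fingerprint
    if txn["country"] != profile["home_country"]:
        score += 0.3                                   # unusual geography
    if txn["amount"] > 3 * profile["avg_amount"]:
        score += 0.2                                   # spending-pattern outlier
    if txn["txns_last_hour"] > profile["typical_hourly_velocity"]:
        score += 0.3                                   # velocity spike
    return min(score, 1.0)

def decide(profile: dict, txn: dict, step_up_threshold: float = 0.5) -> str:
    """'frictionless' for trusted behavior, 'step_up' for risky behavior."""
    return "step_up" if risk_score(profile, txn) >= step_up_threshold else "frictionless"

trusted = {"known_devices": {"d1"}, "home_country": "US",
           "avg_amount": 80.0, "typical_hourly_velocity": 2}

# In-pattern purchase: no friction for the trusted customer.
print(decide(trusted, {"device_id": "d1", "country": "US",
                       "amount": 95.0, "txns_last_hour": 1}))   # frictionless
# New device, new country, large amount, velocity spike: step up.
print(decide(trusted, {"device_id": "d9", "country": "RO",
                       "amount": 600.0, "txns_last_hour": 8}))  # step_up
```

The point of the sketch is the asymmetry: the well-behaved majority never sees a challenge, and friction is reserved for the small slice of traffic that actually looks risky.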

Stripe’s approach here is worth noting because it’s not new thinking, just better execution. Their Radar adaptive 3DS uses machine learning to flag only transactions that genuinely look unusual. The result: over 30% fraud reduction on eligible transactions. That’s massive. And more importantly, it’s not coming at the expense of conversion—it’s actually improving it because good customers aren’t getting randomly blocked.

What Happens When Fraud Detection Meets AI Agents?

Here’s where things get genuinely complicated.

Ashley Furniture runs a byzantine fraud operation that would make most fintech companies weep. They sell both fast-ship items (days) and custom orders (30+ days), each requiring different authorization cycles, different risk profiles, different rules. A human team somehow manages this. But when the company launched agentic commerce—letting AI agents autonomously make purchases on behalf of customers—their entire fraud infrastructure collapsed.

Kyle Dorcas, Ashley’s head of product management, was blunt about it:

“Rule-based fraud detection was not going to be sufficient. In order to combat fraud, detection really has to be in the payment fabric.”

This is the fundamental architectural problem nobody in payments is talking about openly enough. Fraud detection systems were built to evaluate transactions after they happen. A purchase comes in, the system runs it through rules, a decision gets made. Rejected or approved. Done.

But with agentic commerce, there is no “after.” An AI agent initiates a transaction with zero human oversight. If fraud detection isn’t embedded inside the payment system itself, if it can’t evaluate risk in real time and even modify the transaction or add friction mid-flow, it’s already too late.

Payment processors are scrambling to solve this. Stripe’s Shared Payment Tokens let agents initiate purchases without exposing sensitive data, while simultaneously feeding real-time risk signals back through the system. Card testing likelihood. Disputed fraud probability. Issuer decline risk. The system can adapt in milliseconds.
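The architectural shift is easier to see in code. The sketch below is a hypothetical in-flow authorization step, not Stripe's API: the signal names mirror the ones above, and the thresholds are made up. What matters is that the decision happens inside the payment path, before the charge completes, and can inject friction or abort mid-flow:

```python
# A sketch of fraud detection embedded in the payment fabric rather than run
# after the fact. Signal names and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class RiskSignals:
    card_testing_score: float      # likelihood this is card testing
    dispute_probability: float     # likelihood of a later fraud dispute
    issuer_decline_risk: float     # likelihood the issuer declines

def authorize(amount_cents: int, signals: RiskSignals) -> str:
    """Decide inside the payment flow, before the charge settles."""
    if signals.card_testing_score > 0.8:
        return "blocked"           # abort before authorization is attempted
    if signals.dispute_probability > 0.5:
        return "challenge"         # inject friction mid-flow (e.g., 3DS)
    if signals.issuer_decline_risk > 0.7:
        return "reroute"           # e.g., retry through a different path
    return "approved"

# An agent-initiated purchase with clean signals completes untouched...
print(authorize(1999, RiskSignals(0.05, 0.10, 0.10)))  # approved
# ...while a risky one gets friction injected before the money moves.
print(authorize(1999, RiskSignals(0.10, 0.90, 0.10)))  # challenge
```

Because the agent never pauses for human review, anything slower than this in-line decision is, as the article puts it, already too late.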

Yet here’s the uncomfortable part: most payment infrastructures weren’t built for this. They’re patching. And patches fail under stress.

Deepfakes Have Made Identity Verification Nearly Impossible

H&R Block’s Gordon Sheppard did something unsettling at Vegas: he demonstrated how trivial it is to create a perfect synthetic identity.

One still photo. Thirty seconds of audio. Twenty minutes on a laptop. He generated video of himself—fluent Mandarin, Italian, Russian, all perfect—speaking in ways he’d never actually spoken. The technology is mainstream now. It’s not locked away in research labs.

The implication should keep every compliance officer awake at night: traditional identity verification is dead.

Not dying. Dead. Because any single document check is defeatable. A fake driver’s license can be perfect. A fake bank statement can pass. A deepfake video can fool liveness checks. But here’s what fraudsters still can’t fake perfectly: the weird little details.

Gordon showed an example where a fraudulent license was flawless except for one thing: the expiration date didn’t match the authoritative data source. Just that. One tiny misalignment. And the whole thing fell apart.

The new fraud philosophy is multilayered anomaly detection. No single verification is sufficient. You need behavioral checks and document verification and liveness detection and cross-referencing against authoritative databases and pattern recognition for the telltale signs of synthetic identity. Layer them. Trust none of them individually.
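A minimal sketch of that layering, assuming a hypothetical authoritative lookup (a stand-in for a DMV or issuer database) and illustrative check functions:

```python
# Layered anomaly detection: every check is independently fallible, so run
# them all and trust none alone. Data and function names are illustrative.

AUTHORITATIVE_RECORDS = {
    "DL-1234": {"expires": "2027-03-01"},  # stand-in for an authoritative lookup
}

def check_document(doc: dict) -> bool:
    # A forged license can look flawless; cross-referencing catches the one
    # field that disagrees with the authoritative record.
    record = AUTHORITATIVE_RECORDS.get(doc["license_id"])
    return record is not None and record["expires"] == doc["expires"]

def check_liveness(liveness_score: float) -> bool:
    return liveness_score > 0.9            # a deepfake can pass this layer alone

def check_behavior(anomaly_score: float) -> bool:
    return anomaly_score < 0.3             # behavioral-pattern layer

def verify_identity(doc: dict, liveness_score: float, anomaly_score: float) -> bool:
    """Trust no single layer: all checks must agree."""
    return all([check_document(doc),
                check_liveness(liveness_score),
                check_behavior(anomaly_score)])

# A flawless forgery with one mismatched expiration date fails the stack:
forged = {"license_id": "DL-1234", "expires": "2028-03-01"}
print(verify_identity(forged, liveness_score=0.97, anomaly_score=0.1))  # False
```

The forged document passes the liveness and behavioral layers in this example; only the cross-reference against the authoritative record, exactly the expiration-date mismatch from the H&R Block demo, brings it down.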

Stripe’s Identity product programmatically confirms identity using machine learning across multiple verification methods simultaneously. It’s not foolproof—nothing is anymore—but it’s harder to fake than the alternative.

The Bigger Pattern Nobody’s Discussing

What ties these three trends together isn’t a new technology. It’s a fundamental mindset shift from prevention to adaptation.

Old fraud strategy: build a bigger wall. Write more rules. Block more edge cases. Hope the wall holds.

New fraud strategy: assume the wall will be breached. Instead, embed countermeasures throughout the system. Make friction surgical. Make detection real-time. Make identity verification layered.

It’s more expensive. It’s more complex. And it works better because it’s not fighting an arms race where rules become obsolete the moment they’re deployed.

The question for every fintech company now: are you still building walls, or are you building systems that can bend without breaking?



Frequently Asked Questions

What is dynamic authentication in fraud detection?

Dynamic authentication adjusts security requirements based on user behavior and transaction risk. Low-risk, trusted users get frictionless transactions. High-risk transactions trigger additional verification steps. It improves both security and conversion by treating different users differently instead of applying identical rules to everyone.

Why is rule-based fraud detection failing for AI agents?

Rule-based systems evaluate transactions after they occur. With agentic commerce, AI agents complete purchases autonomously with no human review. Fraud detection must be embedded in the payment system itself to catch fraud in real time, not after the fact. Traditional sequential fraud review doesn’t work at agent speed.

Can deepfakes and synthetic identities actually fool fraud systems?

Yes. AI-generated IDs, documents, and video liveness checks can be convincing. But perfect forgeries are hard—fraudsters almost always slip up on small details like expiration dates or signature patterns. The solution is multilayered verification using anomaly detection across multiple sources simultaneously, not relying on any single identity check.

Written by Priya Sundaram

Hardware and infrastructure reporter. Tracks GPU wars, chip design, and the compute economy.



Originally reported by Stripe Blog
