X Crypto Scams: New Auto-Lock Feature Explained

X is finally doing something about crypto scams. The problem? It's treating a symptom, not the disease.

[Image: Notification alert on X showing an account security verification prompt for a cryptocurrency post]

Key Takeaways

  • X is auto-locking accounts that post about crypto for the first time, but the feature treats symptoms, not the root cause of account hacking
  • Platform-wide security issues—weak authentication, minimal SMS protection, skeletal moderation—remain unfixed despite years of high-profile breaches
  • Scammers will adapt to this friction by targeting existing crypto accounts or using alternative tactics; real security requires mandatory 2FA and active impersonation enforcement

X is locking down crypto posts now.

Elon Musk’s platform is rolling out a new feature designed to strangle the endless parade of hacked accounts pushing worthless tokens. The mechanism is straightforward: if you post about cryptocurrency for the first time ever, your account gets locked and you’ll need to verify your identity. According to X Head of Product Nikita Bier, this should “kill 99% of the incentive” for hackers to compromise accounts. Sounds good. Except it doesn’t actually address why X remains a cesspool for financial scams in the first place.

The feature sounds solid on paper

“We are in the process of implementing auto-locking and verification if a user posts about cryptocurrency for the first time in the history of their account. This should kill 99% of the incentive, especially since Google isn’t doing shit to stop the phishing emails.”

Look, the logic here is defensible. A hacker breaks into your account, and the first thing they want to do is pump some shitcoin to a million followers. With this gate in place, they hit friction immediately—authentication required, momentum killed. Bier even noted the platform will flag accounts with 10K+ followers dropping meme coins out of nowhere. “If you have more than 10K followers and you drop a meme coin without any prior connection to crypto, it is always a hack,” he said.
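The gate Bier describes can be sketched in a few lines. To be clear, X has not published its implementation; the function names, keyword list, and follower threshold below are illustrative assumptions based only on his statements.

```python
# Hypothetical sketch of X's auto-lock gate. All names and thresholds
# here are assumptions; X has not published how this actually works.

CRYPTO_KEYWORDS = {"crypto", "token", "meme coin", "airdrop", "solana"}

def mentions_crypto(post_text: str) -> bool:
    text = post_text.lower()
    return any(kw in text for kw in CRYPTO_KEYWORDS)

def should_lock_for_crypto_post(account: dict, post_text: str) -> bool:
    """Lock and require identity verification when an account posts
    about cryptocurrency for the first time in its history."""
    if not mentions_crypto(post_text):
        return False
    return not account.get("has_posted_crypto_before", False)

def looks_like_hack(account: dict, post_text: str) -> bool:
    """Bier's 10K-follower heuristic: a large account dropping a meme
    coin with no prior crypto history is 'always a hack.'"""
    return (
        account.get("followers", 0) > 10_000
        and "meme coin" in post_text.lower()
        and not account.get("has_posted_crypto_before", False)
    )
```

Even this toy version shows the weakness discussed below: the gate only fires on the *first* crypto post, so an account with any prior crypto history sails through.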

It’s a targeted intervention. Not bad.

Why this solution is basically a band-aid

But here’s the thing: X still fundamentally sucks at account security, and no amount of crypto-specific guardrails changes that. The platform has been ground zero for high-profile hacks for years. Barack Obama. Joe Biden. Kanye West. Elon Musk himself. All compromised to push crypto scams, most famously in the 2020 Bitcoin giveaway breach. And those were just the headline cases; countless ordinary users get torched every month.

The auto-lock feature assumes the problem is what gets posted when an account is hacked. That’s cute. The actual problem is that accounts are getting hacked in the first place. Until X fixes its underlying authentication (two-factor enforcement, SMS-bombing protections, session management), this feature is just making scammers slightly less lazy. They’ll shift tactics. They’ll find accounts already talking about crypto. They’ll target smaller influencers who don’t trigger the 10K-follower threshold. They’ll use crypto-adjacent language that bypasses the filter.

Scammers don’t stop. They pivot.

The Jonathan tortoise irony speaks volumes

And then there’s the comedic timing. Hours after Bier announced the new safeguards, someone impersonated a news account and convinced major outlets that Jonathan—the world’s oldest known land animal, a roughly 190-year-old tortoise—had died. The hoax was designed to pump a Solana meme coin. This happened during the discussion about preventing exactly this kind of attack.

Would the auto-lock have caught it? Maybe. Probably. But the real red flag—a fake account impersonating a news organization—had nothing to do with first-time crypto posts. It was basic account impersonation. The kind of thing a platform with functional moderation should catch in hours, not days.

X’s moderation apparatus is skeletal compared to Twitter’s pre-Musk days. Staff cuts mean fewer human reviewers, which means scammers operate with less friction. A crypto-specific filter can’t compensate for that kind of institutional hollowing.

What actually needs to happen

Look, X deserves credit for trying. The feature is better than nothing. But it’s a half-measure. Real security requires:

Mandatory 2FA for accounts over 5K followers. No exceptions. No “I forgot my backup codes.” This alone would stop the bulk of basic account takeovers.

SMS-bombing protections and rate-limiting. Attackers use automation to crack passwords. Add friction to login attempts and you eliminate the spray-and-pray approach.

Active impersonation enforcement. If someone’s claiming to be @Reuters or @AP without verification, they should be suspended within hours, not days. This requires actual moderation staff, which is the expensive part X wants to avoid.

Restrictions on crypto-promoting accounts. Not bans—restrictions. New accounts, or accounts with sudden engagement spikes pushing crypto, should face amplification caps. Let them exist, but don’t let them reach millions overnight.

None of this is groundbreaking. Facebook, YouTube, and even LinkedIn have some version of these controls. X just hasn’t bothered implementing them at scale.

The broader message

Here’s what bugs me most: this feature announcement is PR theater disguised as security. Bier posts about “killing 99% of the incentive,” news outlets run with it, users feel safer, and X can claim progress. Meanwhile, account takeovers continue. Phishing emails (which Bier himself admits Google isn’t stopping) keep working. Scammers keep evolving.

The company took legal action against some crypto scammers last September. Good. Then it kept hosting the same scam ecosystem with minimal guardrails. The pattern is consistent: reactive band-aids instead of systemic fixes.

Until X treats account security as a platform-wide priority—not a crypto-specific problem—this will repeat. Same scams, slightly different wrapper.



Frequently Asked Questions

Will X’s auto-lock feature stop all crypto scams?

No. It slows down some attacks targeting hacked accounts, but scammers will adapt. The real issue is weak account security across the platform, not just what gets posted about crypto.

Can scammers bypass the new verification system?

Likely, yes. They’ll target accounts that already post about crypto, use VPNs to bypass geo-restrictions, or use compromised emails tied to recovered accounts. It’s friction, not a wall.

Why doesn’t X just enforce stronger passwords and 2FA?

Mandatory security features cost money (support overhead, UX friction) and reduce engagement metrics (logged-out users). Easier to implement targeted filters and call it a win.

James Kowalski
Written by

Investigative tech reporter focused on AI ethics, regulation, and societal impact.



Originally reported by Decrypt
