AI Bants Substack

Deepfake Scams 2025: How to Detect AI Voice Cloning and Video Fraud Before You Lose Money

From Real-Time Video Hoaxes to Synthetic Voices—The Shocking Rise of AI-Driven Fraud and How to Defend Yourself

AI Bants
Jun 17, 2025

Image: an AI-generated simulation of a deepfake Zoom call. A screen full of fakes needs a sharp eye to spot the scam.

It started with a scroll-stopping story

The person on the Zoom call looked exactly like her old colleague. Same face, same mannerisms.

His “business partners” were there too, leaning in, nodding along, watching with concern as he struggled with what he claimed were “technical difficulties.”

Sandy Peng, co-founder at Scroll, hadn’t spoken to this person in years. But then, out of the blue, came the Telegram message:

"Hey, let's catch up."

It was normal enough, in a way. But then again…why now?
Plus, that strange issue with his audio. Hadn’t he initiated the call? Wouldn’t he have checked his audio beforehand?

Even stranger—he was urging her to download a Zoom “enterprise update.”
He was even suggesting he could guide her through it.

That’s when Sandy took a breath and a moment to double-check.
Her Zoom version was already the latest. Only just updated.

It felt like a red flag. Like something was off. And it was—badly.

It turned out that every concerned face on her screen, apart from her own, was fake; every glance between the “business partners” synthetic. Every person on the call was AI-generated.

Sandy had just survived a deepfake attack featuring an entire cast of AI-generated characters. Reading her account on my LinkedIn feed, I felt compelled to dig deeper into this terrifying new frontier.



👇 Need defence tactics urgently? Scroll to the end for a quick, no-nonsense playbook: the red flags, how to spot them, and how to alert the authorities.


The Deepfake Crisis: Why 2025 Is the Tipping Point

The scale of AI voice cloning and synthetic media fraud is exploding at an alarming rate: fraud attempts are up 2,137%, voice deepfake fraud specifically has risen 680% year-over-year, and nearly 40% of high-value fraud cases now involve deepfake technologies.

What makes 2025 the tipping point? These three critical factors:

Real-time deepfake generation is now possible with consumer-grade hardware. Scammers can create convincing video calls in real-time, responding to questions and maintaining character consistency throughout extended conversations.

Voice cloning requires only 3-5 seconds of audio from social media posts, voicemails, or public speaking engagements. Your voice can be weaponised from a single Instagram story.

68% of deepfakes are now "almost indistinguishable from authentic media" according to detection specialists, making traditional verification methods obsolete.

Geographic Hotspots: Where Synthetic Media Fraud Is Exploding

The threat spans continents, with distinct regional patterns:

  • North America: 38% of incidents, primarily targeting corporate executives and public figures

  • Asia-Pacific: 27% of incidents, driven by fraud in China, India, Southeast Asia, and Australia

  • Europe: 21% of incidents, often involving cross-border cryptocurrency schemes

  • Africa: 14% of incidents, with Nigeria showing a sevenfold increase in deepfake attempts driven by both romance scams and financial fraud

  • Cross-border operations: 63% of cases involve international criminal networks operating across multiple regions

Anatomy of Modern AI-Generated Fraud

The $25 Million Corporate Heist

The Arup engineering firm case remains the defining example of deepfake corporate fraud. An employee received video calls from what appeared to be the company CFO and several colleagues. Multiple "board members" participated in urgent discussions about fund transfers. Over several sessions, the employee authorised $25 million in transfers—all to AI-generated deepfakes operating from Hong Kong.

Voice Authentication Security Breakdown

A UK energy firm lost €220,000 after scammers cloned the voice of the CEO's boss using just 3-5 seconds of audio from a conference call recording. The synthetic voice perfectly replicated his German accent, his speech patterns, even his habit of clearing his throat before important announcements.

Romance Scams Enter the AI Era

Beth Hyland's year-long relationship with "Richard" cost her $26,000. The scammer used real-time deepfake technology during Skype calls, maintaining visual consistency while adapting to unexpected questions. The emotional manipulation was enhanced by AI-generated responses that felt authentic and caring.




The Yahoo Boys: Nigeria's Deepfake Romance Empire

From Email Scams to AI Romance Fraud

The most sophisticated deepfake romance operations trace back to Nigeria's "Yahoo Boys", a loose network of cybercriminals who evolved from crude email scams into AI-powered relationship fraud. These were no amateur operators either: at the height of their online fraud escapades they worked at industrial scale, with their romance scams netting over $650 million in 2024 alone. But how did they do it?

The Two-Device Deepfake Setup

The Yahoo Boys perfected real-time face-swapping during video calls using a two-device setup: one smartphone conducts the video call with the victim, while a second device runs face-swapping software that overlays a digital mask matching the fake dating profile. Ring lights and stabilised camera stands keep the deepfake clear and convincing throughout extended conversations.

The $850,000 Brad Pitt Scam

Their most notorious case involved three Yahoo Boys who defrauded a French woman named Anne out of $850,000 by impersonating Brad Pitt. They sent AI-generated images of the actor in a hospital bed, claiming he needed financial assistance for cancer treatment and couldn't access his accounts due to his divorce from Angelina Jolie. Anne even left her wealthy husband, believing she was in a relationship with the Hollywood star. Never underestimate the power of self-delusion. It’s what scammers count on most.

Deepfake-as-a-Service Operations

Today's synthetic media fraud has all of the attributes of a solid business model with tiered pricing structures in place, and assured profits. But it doesn’t come cheap:

  • $300/minute for custom deepfake video generation

  • $150 for real-time voice cloning during calls

  • $50,000/month for enterprise phishing suites with AI integration

These services largely operate through encrypted Telegram channels, accepting cryptocurrency payments and providing 24/7 technical support to criminal clients worldwide.

State-Sponsored Disinformation

Beyond financial fraud, deepfakes are being weaponised against political discourse globally. Russian-aligned groups deploy synthetic media through their "Doppelganger" campaign, impersonating outlets like The Guardian and Der Spiegel to spread disinformation and erode support for Ukraine across Europe.

China's Cognitive Warfare Campaign

China has ramped up AI-powered "cognitive warfare" against Taiwan, with over half a million controversial messages detected in 2024 alone. During Taiwan's 2024 presidential election, sophisticated deepfake videos showed candidates making fabricated statements, spreading rapidly across Facebook and YouTube.

Global Democratic Interference

The threat extends globally—India's 2024 elections featured deepfake videos of deceased politicians, while European Parliament elections faced coordinated attacks from Russian-linked networks creating dozens of fake news sites. This industrialised disinformation represents a fundamental threat to democratic discourse, where authentic political communication becomes increasingly indistinguishable from synthetic manipulation.

The Weaponisation of Truth

Perhaps most concerning is how state actors systematically blur the lines between authentic and synthetic content. Authoritarian regimes are increasingly using AI to monitor, target, and silence activists while undermining democratic processes, creating an environment where citizens struggle to distinguish reliable sources from fabricated ones. This deliberate confusion erodes public trust in legitimate journalism, making populations more susceptible to propaganda.


Detection Technology: The Arms Race Against AI Deception

Leading AI Detection Tools

Sensity AI leads commercial detection with 95-98% accuracy, monitoring over 9,000 sources and identifying more than 35,000 malicious deepfakes annually. Their platform analyses facial geometry, lighting inconsistencies, and temporal artifacts invisible to human observers.
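To make "temporal artifacts" concrete, here is a toy sketch of one such signal — emphatically not Sensity's actual method. Natural head movement is smooth from frame to frame, while some face-swap pipelines introduce abrupt per-frame landmark jumps. The landmark tracks below are simulated with numpy purely for illustration.

```python
import numpy as np

def temporal_jitter(landmarks: np.ndarray) -> float:
    """Mean frame-to-frame displacement of tracked facial landmarks.

    Natural head motion is smooth; some face-swap pipelines introduce
    abrupt per-frame jumps that inflate this number."""
    deltas = np.diff(landmarks, axis=0)               # (frames-1, points, 2)
    return float(np.linalg.norm(deltas, axis=-1).mean())

rng = np.random.default_rng(1)
frames, points = 30, 5

# Simulated data: a "natural" track drifts slowly and coherently;
# the "swapped" track adds independent per-frame landmark jumps.
base = rng.uniform(0.0, 100.0, (1, points, 2))
drift = np.cumsum(rng.normal(0.0, 0.2, (frames, 1, 1)), axis=0)
natural = base + drift
swapped = natural + rng.normal(0.0, 3.0, (frames, points, 2))

print(f"natural jitter: {temporal_jitter(natural):.2f}")
print(f"swapped jitter: {temporal_jitter(swapped):.2f}")
```

Real detectors combine dozens of such cues with learned models; the point here is only that "invisible to human observers" does not mean invisible to arithmetic.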

Pindrop Security specialises in voice authentication security, detecting synthetic voices in 0.14 seconds with 99% accuracy. Their system analyses vocal tract characteristics, breathing patterns, and other subtle vocal cues that AI struggles to replicate consistently.

BioID focuses on liveness detection technology that analyses blood flow patterns and micro-movements to verify real human presence, achieving over 98% accuracy in detecting deepfakes and synthetic faces during authentication.
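As a minimal illustration of the kind of audio feature such tools build on (real systems are far more sophisticated, and none of the vendors above publish their methods), the sketch below computes spectral flatness, a standard audio descriptor, on two simulated waveforms: a pure tone standing in for over-smooth synthetic audio, and the same tone buried in broadband noise standing in for richer natural sound.

```python
import numpy as np

def spectral_flatness(signal: np.ndarray) -> float:
    """Geometric / arithmetic mean of the power spectrum.

    Near 1.0 for noise-like audio, near 0.0 for purely tonal audio.
    Natural speech sits in between; over-smooth synthetic audio can
    skew measurably toward one end."""
    power = np.abs(np.fft.rfft(signal)) ** 2 + 1e-12   # epsilon avoids log(0)
    geometric = np.exp(np.mean(np.log(power)))
    return float(geometric / np.mean(power))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 16000, endpoint=False)

# Stand-ins for illustration only: a pure 440 Hz tone versus the same
# tone mixed with broadband noise.
tonal = np.sin(2 * np.pi * 440.0 * t)
noisy = tonal + 0.5 * rng.standard_normal(t.size)

print(f"tonal flatness: {spectral_flatness(tonal):.4f}")
print(f"noisy flatness: {spectral_flatness(noisy):.4f}")
```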

🔒 What to Do If You Suspect an AI-Powered Scam

Deepfake scams create urgency and bypass your usual checks. Here’s how to spot them, verify what’s real, and act fast when something feels off.
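The pattern behind almost every case in this article is the same: urgency, secrecy, and payment pressure arriving together. As a toy sketch of that idea, the screen below flags which of those categories a message trips. The categories and phrase lists are illustrative assumptions, not a real detection method — tune them for your own context.

```python
# Illustrative categories and phrases only -- not an exhaustive list.
RED_FLAGS = {
    "urgency": ["urgent", "immediately", "right now", "before end of day"],
    "secrecy": ["don't tell", "keep this between us", "confidential"],
    "payment": ["wire transfer", "gift card", "crypto", "bank details"],
    "setup":   ["download this update", "install this", "switch to telegram"],
}

def red_flag_score(message: str) -> list[str]:
    """Return the red-flag categories a message trips; the more
    categories hit at once, the stronger the reason to verify
    out-of-band before acting."""
    text = message.lower()
    return [category for category, phrases in RED_FLAGS.items()
            if any(phrase in text for phrase in phrases)]

msg = "Urgent: wire transfer needed immediately, keep this between us."
print(red_flag_score(msg))  # ['urgency', 'secrecy', 'payment']
```

No keyword list catches a live deepfake call, of course; the real defence is the same as Sandy's: pause, and verify through a channel the other party didn't choose.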

© 2025 AI Bants