From fake nudes and voice clones to executive impersonation, the new scam economy runs on synthetic media — and the best defense now is verification, not instinct.
Introduction: The Scam Is No Longer Theoretical
In early 2024, Hong Kong police described a case in which an employee joined what appeared to be a confidential video meeting with the company’s CFO and other staff and approved transfers to five bank accounts, costing the company about HK$200 million. Police said the “meeting” was a pre-recorded deepfake built from public video clips and voice samples. That case matters because it proves the core point: criminals do not need perfect AI. They need believable AI plus pressure.
That is now a global cybercrime pattern, not a one-off headline. INTERPOL has warned that deepfakes are being used to defraud and extort victims through impersonation scams, online sexual blackmail, and investment fraud, while the FBI says synthetic content is being used to facilitate crimes including fraud and extortion.
What Deepfakes Are — and Why Criminals Use Them
eSafety defines a deepfake as a digital photo, video, or sound file of a real person that has been created with AI to produce an extremely realistic but false depiction of them doing or saying something they never did or said. Those systems often draw on real photos, recordings, and clips, which is why public-facing content can become raw material for abuse.
For criminals, AI does not replace classic social engineering. It upgrades it. The FBI says AI increases the speed, scale, and automation of phishing and impersonation attacks, while voice and video cloning make urgent requests feel more believable to individuals and businesses alike.
How Criminals Use Deepfakes for Extortion and Coercive Fraud
1. Sextortion and Fake Explicit Content
One of the most damaging uses of AI is the creation of fake sexual images or videos to humiliate, threaten, or blackmail a target. The FBI says criminals use generative AI tools to create pornographic photos of victims for sextortion schemes, and eSafety treats digitally altered or AI-generated intimate images as image-based abuse when they are shared or threatened without consent.
This is not a niche problem. eSafety says pornographic material makes up the overwhelming majority of deepfake content online and that women and girls are disproportionately targeted. NCMEC has also warned that AI-generated sexual imagery is being used against children, including in sextortion cases designed to coerce victims into sending more content or money.
2. Voice-Cloned Emergency Scams
Voice cloning lets criminals imitate a child, partner, boss, or colleague with very little source material. The FTC warns that a scammer can clone a loved one’s voice from a short clip posted online, then use that cloned voice to demand urgent money in a fake emergency.
What makes these calls dangerous is not just realism. It is timing. The FTC says scammers rely on panic, urgency, and secrecy, pushing victims to wire money, send cryptocurrency, use payment apps, or buy gift cards before they verify the story.
3. Executive Impersonation and Payment Fraud
Businesses are facing the same playbook at a higher price point. In the Hong Kong case, authorities said the victim was lured by a phishing email into a group video call featuring a fake CFO, then authorized transfers totaling roughly HK$200 million. Police said the deepfake was assembled from downloaded public clips and voices, and there was no real live interaction.
The threat goes well beyond one finance team or one country. The FBI and allied cyber agencies say AI-generated media can be used to impersonate corporate officers, enable fraudulent communications, and trick staff into authorizing transactions or handing over sensitive information.
4. Reputation Attacks and Coercive Blackmail
Not every deepfake attack is about an immediate payout. Some are designed to damage credibility first and extract money, silence, or compliance second. INTERPOL explicitly links deepfakes to online sexual blackmail, while eSafety notes that fake intimate content can still cause real-world shame, harassment, and coercive control even when the depicted event never happened.
Why These Scams Work
Deepfake crime works because it hijacks trust faster than most people can think. A familiar voice, an executive face on a screen, or an explicit image that looks real can push victims into fear before they slow down and verify anything. The FTC’s advice on family-emergency scams still captures the logic perfectly.
“Don’t trust the voice. Call the person who supposedly contacted you and verify the story.”
That rule scales from households to boardrooms. The medium may be AI-generated, but the underlying attack is still social engineering: urgency, authority, secrecy, embarrassment, and pressure.
The Red Flags Most People Miss
If an “emergency” or “confidential request” arrives with pressure to act now, keep it secret, change payment details, move to a different platform, or pay by crypto, wire transfer, payment app, or gift card, treat it as hostile until proven otherwise. Those are classic scam signals, and agencies now warn that AI can make them look or sound more convincing.
Another red flag is the illusion of presence. In the Hong Kong case, the fake meeting looked real enough to pass, but police later said there was no genuine interaction because the footage was pre-recorded. If a caller or video participant avoids unscripted back-and-forth, refuses independent verification, or pushes you to decide before checking, that is a serious warning sign.
How to Reduce Your Risk Before You Become a Target
For individuals, the basics still matter. The FBI says limiting public exposure of your image and voice, tightening privacy settings, and verifying identities through known numbers or separate channels can reduce what fraudsters can scrape and how easily they can impersonate you. The FBI also recommends creating a secret word or phrase with family members to verify identity during crisis calls.
For organizations, detection tools help, but they are not enough on their own. A joint information sheet led by the NSA, ASD/ACSC, CCCS, and NCSC-UK says detection will remain a cat-and-mouse game as the technology evolves and recommends a broader trust model that includes policy, education, provenance, and verification. In practice, that means not approving sensitive actions based only on what you saw or heard in a message or meeting. Use out-of-band verification, dual approval for payment changes, and hard rules around executive requests.
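To make that concrete, here is a minimal sketch, in Python, of what a dual-approval and out-of-band-verification gate for payment changes might look like. Every name in it is hypothetical, and it stands in for whatever ticketing or workflow system an organization actually uses; the point is the rule it encodes, not the code itself.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a payment-change control, not any specific product.
@dataclass
class PaymentChangeRequest:
    requested_by: str                              # who asked for the change
    new_account: str                               # bank details being requested
    approvals: set = field(default_factory=set)    # distinct staff who approved
    callback_verified: bool = False                # confirmed via a number already on file

    def approve(self, approver: str) -> None:
        # The requester can never count as one of the approvers.
        if approver != self.requested_by:
            self.approvals.add(approver)

    def record_callback(self) -> None:
        # Set only after calling the requester on a known number,
        # never on a number supplied in the email, chat, or meeting itself.
        self.callback_verified = True

def can_execute(req: PaymentChangeRequest) -> bool:
    """Allow the change only with two distinct approvers AND an out-of-band check."""
    return len(req.approvals) >= 2 and req.callback_verified

req = PaymentChangeRequest(requested_by="cfo@example.com", new_account="XX-000000")
req.approve("finance.lead@example.com")
print(can_execute(req))   # False: one approver, no callback yet
req.approve("controller@example.com")
req.record_callback()
print(can_execute(req))   # True: dual approval plus out-of-band verification
```

The design choice is deliberate: no single message, face, or voice, however convincing, can move money on its own.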
What to Do Immediately if You Are Targeted
If the attack is sexual extortion or intimate-image abuse, do not pay and do not send more material. eSafety says to stop all contact, preserve evidence, and report the abuse; it can work with platforms to remove content or stop threats in cases that fall within its remit.
If you are over 18 and worried about non-consensual intimate images being shared, StopNCII is a free tool that creates a hash, or digital fingerprint, of your image on your device and shares only that hash with participating companies to help detect and remove matching content. If the images were taken before you turned 18, NCMEC’s Take It Down is designed for that situation: it says it can help remove or stop the sharing of nude, partially nude, or sexually explicit images or videos, again without you having to upload the material itself.
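To picture how that works, the sketch below computes a fingerprint of a photo locally; only the resulting string would ever be sent anywhere. It is illustrative only: these services use image-matching (perceptual) hashes designed to survive resizing and minor edits, not the plain SHA-256 shown here, and the function name is made up.

```python
import hashlib
from pathlib import Path

def fingerprint_image(path: str) -> str:
    """Compute a hash of an image file locally; only this string would be shared.

    Illustrative only: services like StopNCII use image-matching (perceptual)
    hashes so near-duplicates can be caught, not a plain SHA-256 like this.
    """
    data = Path(path).read_bytes()           # the image stays on this device
    return hashlib.sha256(data).hexdigest()  # the fingerprint is all that leaves it

# Example: share the fingerprint, never upload the photo itself.
# print(fingerprint_image("my_photo.jpg"))
```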
For child sexual exploitation, blackmail involving explicit images, or threats to spread such material, NCMEC says victims should also file a CyberTipline report. NCMEC further notes that Take It Down can help when explicit images are real or AI-generated, and that the broader rise of generative AI is already being used in child exploitation and financial sextortion cases.
If money has already moved, speed matters. The FBI says victims should report with as much identifying and transaction information as possible, including payment method, dates, accounts, wallet addresses, and how contact began. The faster a bank, exchange, platform, or investigator sees the trail, the better the chance of interruption or recovery.
The Legal and Policy Response Is Catching Up — Slowly
Governments are starting to move, but the law is still uneven. In the UK, the government announced that creating sexually explicit deepfake images without consent would become a criminal offence, reflecting a wider recognition that fake content can cause real harm even when no original image existed. At the same time, cyber agencies are pushing technical measures like content credentials and provenance standards to help restore trust in digital media.
That matters, but it does not solve the immediate problem for victims. Right now, the most reliable protection is a mix of privacy discipline, verification habits, fast reporting, and clear response plans for families, schools, and businesses.
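For readers curious what “provenance” means in practice, the sketch below shows the underlying idea: a publisher signs the bytes of a media file, and anyone holding the matching public key can later confirm the file has not been altered. This is not the Content Credentials (C2PA) format itself, just the generic sign-and-verify concept, and it assumes the third-party Python cryptography package.

```python
# Minimal sketch of media provenance: sign the file's bytes at publication,
# verify them later. Not the actual Content Credentials (C2PA) format.
# Requires the third-party "cryptography" package: pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Publisher side: generate a key pair and sign the media bytes.
publisher_key = Ed25519PrivateKey.generate()
media_bytes = b"...raw bytes of the published video or image..."
signature = publisher_key.sign(media_bytes)

# Verifier side: given the publisher's public key, check the file is untouched.
public_key = publisher_key.public_key()

def is_authentic(data: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, data)   # raises if data or signature was altered
        return True
    except InvalidSignature:
        return False

print(is_authentic(media_bytes, signature))                 # True: untouched
print(is_authentic(media_bytes + b"tampered", signature))   # False: content changed
```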
Summary: The Smartest Defense Is No Longer “Spot the Fake”
Deepfakes have already moved from novelty to criminal infrastructure. Around the world, agencies are documenting the same pattern: AI-generated faces, voices, and intimate images are being used to blackmail victims, imitate trusted people, trigger panic, and drive fraudulent transfers.
The hard truth is that people will keep losing if they rely only on their eyes and ears. The better rule is stricter: verify outside the message, outside the call, and outside the meeting. In the age of deepfake scams, trust is no longer something you feel. It is something you confirm.