Real-time deepfakes, in which fraudsters use video or audio generated with artificial intelligence to replicate someone’s voice, image and movements as a scheme is happening, are the latest way criminals are perpetrating a host of frauds.
In this article, the authors describe real-time deepfake schemes and what can be done to combat them.
In February, an employee of a multinational company thought he was logging into a virtual meeting with his organization’s chief financial officer (CFO) and several of his co-workers. At first, he’d been skeptical of the meeting. The initial message he
received about it seemed like a phishing email since it mentioned a highly important transaction that needed to be carried out in secrecy. He set his fears aside once the meeting started; everyone else on the call appeared to be people he’d seen before.
But nobody else on the conference call was an actual person; they were all elaborate real-time deepfake video recreations in which fraudsters used artificial intelligence (AI) to replicate the voices, images and movements of people as the scam happened.
During the call, the fake CFO instructed the employee to transfer $25 million to multiple accounts at Hong Kong banks belonging to the criminals. The employee was in Hong Kong, but the fabricated video-meeting participants were ostensibly
in London. It wasn’t until later, when the employee checked with the corporation’s head office, that he learned he’d been the victim of a scam. (See “Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’,”
by Heather Chen and Kathleen Magramo, CNN, Feb. 4, 2024.)
In April, fraudsters used a real-time deepfake video of Tesla CEO Elon Musk to defraud a South Korean woman out of £40,000 (approximately $50,000). In this classic romance scheme with a high-tech twist, the victim believed Elon Musk had added her
as a friend on Instagram. In a subsequent deepfake video, he told her he loved her and convinced her to deposit the money in a South Korean bank account with promises that she’d get rich. (See “South Korean woman loses £40k in Elon Musk romance scam
involving deepfake video,” by Shweta Sharma, Independent, April 24, 2024 and “Drake’s fake Tupac, a $50,000 Elon Musk romance scam, and AI-generated racist tirades: Deepfakes are terrorizing society,”
by Jasmine Li, Fortune, April 29, 2024.)
These are just a few examples of frauds committed with real-time deepfakes — the latest way that fraudsters are perpetrating schemes with AI technology. Deepfake is an umbrella term often used in news media to encompass all sorts of AI-generated schemes,
including those carried out with prerecorded deepfakes and those occurring in real time. In this article, we focus on the current crop of deepfake schemes — real-time deepfakes, which are generated as the scheme occurs. (See “Real-time deepfakes are a
dangerous new threat. How to protect yourself,” by Jon Healey, Los Angeles Times, May 11, 2023.)
Because of ever-advancing AI and machine learning (ML) technology, fraudsters can now interact with their victims live: impersonating business executives to authorize transactions; impersonating family members in need of help in “grandparent”
scams; portraying public figures conveying misinformation; and deceiving people out of their money in romance scams. Real-time deepfakes present a unique challenge for organizations and individuals alike. How do we fight back against something so convincing
that we don’t even know we’re being defrauded? In this article, we’ll examine the dangers that real-time deepfakes pose to organizations and individuals, and what’s being done to address a fraud that’s literally too good to be true.
Fraud detection in the digital age
Long before the internet was used for processing financial transactions, fraud examination techniques included a review of hard-copy source documents, such as invoices, purchase orders, checks and bank statements. An early training session from the Association
of Certified Fraud Examiners (ACFE) illustrated the value of these procedures quite clearly. The trainer described how ACFE founder and Chairman Dr. Joseph T. Wells, CFE, CPA, and James Ratley, CFE, president emeritus of the ACFE, solved an embezzlement
case. During the investigation, they noticed a few paper checks that had been folded and filed neatly in a drawer among hundreds of others.
They soon learned that a purchasing agent had instructed the accounts payable clerk to give him the paper checks for a certain vendor. He then folded each check in half and placed it in his shirt pocket until he went to his bank and deposited it. The
folded checks were returned with the bank statement and placed in the drawer with all the others, but the telltale folds gave them away.
Such hard-copy clues for fraud examiners don’t exist in the digital age, and as e-commerce exploded in the 1990s with the arrival of Amazon and PayPal, fraudsters jumped at the chance to perpetrate their schemes electronically and from afar. One of the
first frauds of the digital era was the “Nigerian prince” scam, which used email to ask the victim for a large amount of money to help a member of Nigerian royalty who’d supposedly been wronged in some way. It’s an update of an old scam from the 19th century
when fraudsters corresponded by mail with their victims, asking for money to get a Spanish prince out of prison. In 2019, one of the earliest reported cases of a deepfake occurred when fraudsters used AI-generated audio in a phone call to convince
an executive of a U.K.-based energy company to transfer 220,000 euros (approximately $240,000) to a Hungarian supplier. (See “The Evolution of Scams: A Brief History,” by Brittani Johnson,
Iris, April 27, 2023; “The Prince is
Back and He Still Needs Your Help Moving Some Money, Old Scams are New Scams,” by Jeff Laughlin, Southern Illinois University, Edwardsville, IT Spotlight, March 15, 2021; and “A Voice Deepfake Was Used to Scam a CEO Out of $243,000,”
by Jesse Damiani, Forbes, Sept. 3, 2019.)
During a December 2022 ACFE training session in Austin, Texas, Zachary Kelley, co-author of this article, demonstrated just how easy it would be for fraudsters to make their deepfakes a reality by showing a real-time, AI-generated video of judge Simon
Cowell singing on the television show “America’s Got Talent.” The video, created by AI company Metaphysic, shows Cowell singing on a screen behind contestant Daniel Emmett while the real Cowell watched from the judges’ table. Cowell
wasn’t actually singing; the voice was Emmett’s. (See “Simon Cowell Sings on Stage?! Metaphysic Will Leave You Speechless,” America’s Got Talent, June 2, 2022.) AI-generated deepfakes made
with technology like Metaphysic’s provide cybercriminals with sophisticated tools that can do more than impersonate TV judges.
Criminals could use these tools for nefarious activities such as thwarting security systems, including biometrics, which verify identity through physical characteristics like faces or fingerprints. Although there have yet to be public reports of deepfakes
used to circumvent biometric security systems, experts believe it’s only a matter of time. Technology research and consulting firm Gartner, Inc., issued a warning in February that by 2026, attacks using AI-generated deepfakes on face biometrics “will mean
that 30% of enterprises will no longer consider such identity verification and authentication solutions to be reliable in isolation.” (See “Gartner Predicts 30% of Enterprises Will Consider Identity Verification and Authentication Solutions Unreliable
in Isolation Due to AI-Generated Deepfakes by 2026,” Feb. 1, 2024.)
Last year, Haywood Talcove, chief executive of LexisNexis Risk Solutions’ Government Group, told the Los Angeles Times that the newest AI technology could circumvent security techniques that companies use in place of passwords. He cited as an example California’s two-step
online identification process, in which people must upload a picture of their driver’s license or identification card and a selfie. Fraudsters can easily buy a fake ID online, then use deepfake technology to generate a matching selfie. Talcove also
said that he’d be nervous if his bank started using his voice for a password. “Just using voice alone, it doesn’t work anymore.” (See “Real-time deepfakes are a
dangerous new threat. How to protect yourself.”)
Generative artificial intelligence (GAI), a term for AI systems capable of creating text, images, video, audio, code and other media in response to queries, is a game changer for organizations and corporations worldwide. They must now evaluate the ease
of access that criminals have to their information and review the audio and video recordings on their social media accounts and websites. Individuals should consider whether a business (such as a bank) uses voice recognition for its security system
because that means it will have a recording of their voice. Banks and other businesses may store customers’ voice or image files in a database, where an employee or hacker could access and sell them, such as in the case of the Outabox hack. The Australian
firm had implemented facial recognition software in dozens of bars and clubs. In late April, more than 1 million customers reportedly had portions of their personally identifiable information (PII), including names, addresses and driver’s license details,
stolen and published online. (See “The Breach of a Face Recognition Firm Reveals a Hidden Danger of Biometrics,” by Jordan Pearson, Wired, May 2, 2024.)
Other (mis)uses of deepfakes
The word “deepfake” first appeared in late 2017 on Reddit as the username of someone who “… shared (nonconsensual) pornographic videos that used open-source face-swapping technology.” (See “Deepfakes, explained,”
by Meredith Somers, MIT Sloan School of Management, July 21, 2020.) The images were prerecorded, not real time. An AI algorithm placed celebrities’ faces into authentic porn videos.
Pornography is one of the most frequent uses of prerecorded deepfake videos, with approximately 99% of the victims being women. Victims include pop music icon Taylor Swift, whose deepfake image received more than 45 million views earlier this year on X,
formerly Twitter, before the social media site took it down. Using just one clear face image, a scammer can create a prerecorded 60-second deepfake pornographic video of anyone in less than 25 minutes at almost no cost. (See “Generative AI fueling spread of deepfake pornography across the internet,” by Luke Hurst, EuroNews, Oct. 20, 2023 and “Why the Taylor Swift AI Scandal is Pushing Lawmakers to Address Pornographic Deepfakes,” by Susie Ruiz-Lichter, National Law Review, April 24.)
These hyper-realistic videos can depict events that never occurred, and GAI makes it difficult to distinguish fact from fiction. Prerecorded sham celebrity endorsement videos have portrayed CBS News host Gayle King and actor Tom Hanks as spokespersons
for a weight-loss product and a dental plan, respectively. (See “Celebrities warn followers not to be duped by AI deepfakes,” by Anumita Kaur, The Washington Post, Oct. 3, 2023.) In January,
approximately 5,000 New Hampshire residents received a prerecorded deepfake robocall that impersonated U.S. President Joe Biden instructing them not to vote in the presidential primary. (See “Democratic operative admits to commissioning Biden AI robocall in New Hampshire,”
by Pranshu Verma and Meryl Kornfield, The Washington Post, Feb. 26, 2024.) Debunking deepfakes after the fact can be too little too late, as viewers of the videos or recipients of the robocalls may continue to believe the misinformation.
Bad actors are increasingly using real-time deepfake audio to carry out “grandparent scams.” In Germany, such scams originated in the 1990s with scammers calling an unsuspecting grandparent and impersonating a grandchild (or other loved one) urgently
needing money for an emergency. [See “Duping Oma: what to know about the ‘Enkeltrick’ scam in Germany,” by Shelley Pascual, The Local Germany, June 26, 2018.] AI-generated voice recordings
cloned from the actual voice of the person being impersonated lend credibility to these scams. Last year, an 86-year-old Florida woman lost $16,000 to scammers who used a deepfake to impersonate her grandson, supposedly in jail and needing bond money. (See “Federal Trade Commission warns AI voice cloning is enhancing ‘grandparent scam’,” by Jessica Bruno, WPTV, March 23, 2023.)
Last year, the U.S. Federal Trade Commission (FTC) issued a warning about such scams, urging potential victims to contact their loved ones using a known phone number to verify or disprove the request. The FTC also warned that deepfake audio scammers may
ask their targets to “wire money, send cryptocurrency, or buy gift cards and give them the card numbers and PINs.” (See “Scammers use AI to enhance their family emergency schemes,” by Alvaro
Puig, FTC, March 20, 2023.) Readily available, affordable cloning software makes convincing audio possible from just a short clip of anyone’s voice. In 2023, Microsoft introduced VALL-E, an AI language model that can simulate a person’s voice with only three seconds of audio. [See “VALL-E (X),” Microsoft.]
Inadequate legislation
Governments around the world have tried to address the use of deepfake technology with legislation and regulation, but many of these provisions may be insufficient to keep up with advancing technology, and most have been aimed largely at prerecorded deepfakes
rather than real-time ones. Recent legislative initiatives include:
- European Commission Artificial Intelligence Act (approved in March). Described as the “first-ever legal framework on AI,” it addresses the risks of AI and positions Europe to set a global standard in the ethical and sustainable development
of AI technologies. It aims to ensure that AI systems respect fundamental rights and are safe for use. (See “Shaping Europe’s digital future,” European Commission.) The law applies to
every company providing AI technologies within the European Union.
- U.K.’s Online Safety Act (criminal offenses effective January 2024). It imposes duties that “require providers of services regulated by the Act to identify, mitigate, and manage the risks of harm … from: (i) illegal content and activity,
and (ii) content and activity that is harmful to children, and confers new functions and powers on the government regulatory agency, OFCOM (the Office of Communications).” [See “Online Safety Act 2023,”
legislation.gov.uk.]
- China’s deep synthesis provisions (effective 2023). The regulation prohibits the creation of deepfakes without user consent and requires disclosure when AI is used to generate content. The law applies only when the content is distributed on the internet. (See
“China’s New Legislation on Deepfakes: Should the Rest of Asia Follow Suit?,” by Asha Hemrajani, The Diplomat, March 8, 2023.)
- U.S. Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act (introduced earlier this year in the U.S. House and Senate). It allows victims to sue deepfake creators if they knew or “recklessly disregarded” the victim’s
lack of consent. But this is a civil court remedy, and there’s no provision for criminal prosecution. About 14 U.S. states have enacted laws against nonconsensual pornographic deepfakes and 10 have laws limiting the use of deepfakes in political
campaigns. (See “How a New Bill Could Protect Against Deepfakes,” by Solcyré Burga, TIME, Jan. 31 and “More and More States Are Enacting Laws Addressing AI Deepfakes,” by Bill Kramer,
MultiState, April 5.)
- U.S. Federal Trade Commission’s Impersonation Rule. A proposed amendment to the FTC’s Impersonation Rule prohibits impersonation of individuals in addition to governments and businesses and extends liability to parties who provide
goods and services with knowledge or reason to know those items would be used for illegal impersonation. (See “FTC Proposes New Protections to Combat AI Impersonation of Individuals,”
FTC, Feb. 15, 2024.)
Governments around the globe are wrestling with the legislative provisions needed to protect citizens against deepfakes and to punish the perpetrators. It’s a complex issue that requires strong legislative action, ongoing monitoring and updates as authorities
learn what’s effective and as technological developments morph into new tools for criminals.
Whether current laws and regulations can adequately address the legal implications of deepfakes is a key question in an ever-evolving technology landscape. The data security and privacy concerns that arise from deepfakes mean that individuals,
businesses and governments must work together to develop and implement multifaceted strategies. Mitigating the inherent risks of emerging technologies and protecting entities from AI-generated fraud takes a multipronged approach that includes detection
tools, strong legal frameworks and robust authentication measures.
Combating real-time, AI-generated deepfakes
Only a limited number of strategies for combating fraud threats from real-time, AI-generated deepfakes are likely to be effective, and they differ from those suggested for prerecorded deepfakes. Checklists that advise looking for mismatches between a deepfake
person’s image and the accompanying voice, or for inaccurate lighting and shadows, no longer apply to AI-generated, real-time deepfakes.
Management must make sure that employees (particularly in financial service roles) are aware of real-time deepfakes, the risks they pose and how criminals use them. Organizations should establish strong protocols to prevent, identify and mitigate deepfakes.
An oft-repeated phrase in auditing is, “Trust but verify,” but with deepfakes, we should consider the maxim of Sam Antar, convicted fraudster behind the infamous Crazy Eddie securities fraud: “Don’t trust; just verify.” (See “Sam Antar: The CFO behind
the Crazy Eddie fraud,” by Quentin Fottrell, The Wall Street Journal, July 29, 2014.) To verify the authenticity of what could be real-time deepfake video and audio, employers and fraud examiners need to ensure that they, their employees and their clients
obtain training in this area. According to the U.S. Department of Homeland Security, the following strategies are necessary to combat and mitigate the harm from real-time deepfakes:
- Establish policies and legislation that allow organizations to scrutinize media and act when necessary.
- Develop capabilities to identify deepfakes and demonstrate the authenticity of media.
- Create rules for how to act when a deepfake is discovered.
- Foster an environment where truth and authenticity are promoted and deepfakes aren’t tolerated. (See “Increasing Threat of Deepfake Identities,” Department of Homeland Security, 2021 and “Deepfake, Phase 2 – Mitigation Measures,” Department of Homeland Security, 2022.)
But in the absence of detailed policies and procedures for fighting and mitigating real-time deepfakes, the best advice might be the simplest: Employ a “zero-trust” attitude. If a late-night phone call from a family member in distress seems suspicious,
hang up and call them back using a number you know to be real. Trust your gut. As scammers continue to use more sophisticated technology to carry out their real-time deepfake schemes, anti-fraud professionals will need to meet the challenge with even
more advanced technology.
Digital watermarks or other data and images could be embedded into video when it’s produced, providing a method for authentication. Digital watermarks add pixel or audio patterns that are detectable by computer but imperceptible to humans. The patterns
disappear in any modified areas, enabling the owner to prove that the media is an altered version of the original. But it’s not yet a foolproof method and could be cost-prohibitive for real-time video. (See “Why Watermarking Is Just One Part of Combating
Deepfakes,” by Nil Shah, Variety, March 21, 2024 and “Does Watermarking Protect Against Deepfake Attacks?” by Nick Gaubitch, Pindrop, Oct. 20, 2023.)
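To make the correlation idea concrete, here’s a minimal Python sketch of a spread-spectrum audio watermark. It isn’t any vendor’s actual method; the key, embedding strength and detection threshold are illustrative assumptions. A secret key generates an inaudibly quiet pseudorandom pattern that is added to the audio; detection correlates the audio against that same pattern, and regenerated or altered segments lose the correlation.

```python
import numpy as np

STRENGTH = 0.01  # illustrative embedding amplitude, well below audible levels

def embed_watermark(audio: np.ndarray, key: int) -> np.ndarray:
    """Add a low-amplitude pseudorandom +/-1 pattern derived from a secret key."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=audio.shape)
    return audio + STRENGTH * pattern

def watermark_present(audio: np.ndarray, key: int) -> bool:
    """Correlate against the key's pattern; editing or re-synthesis destroys the match."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=audio.shape)
    score = float(np.dot(audio, pattern)) / len(audio)  # ~STRENGTH if intact, ~0 if not
    return score > STRENGTH / 2

# Example: a 10-second clip at 16 kHz
original = np.random.default_rng(0).normal(0, 0.2, 160_000)
marked = embed_watermark(original, key=42)
print(watermark_present(marked, key=42))    # True: watermark intact
print(watermark_present(original, key=42))  # False: nothing to find
```

A production scheme would hide the pattern perceptually and survive compression; this sketch only shows why a modified region stops correlating, which is the property the articles above describe.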
The U.S. government is encouraging innovation to solve complex consumer-protection problems, such as identifying deepfakes. The FTC recently announced four winners of the Voice Cloning Challenge (now in its sixth year under the America COMPETES Act):
- AI Detect, by Omni Speech, uses AI algorithms to distinguish subtle differences between genuine and synthetic voice patterns.
- DeFake, by Ning Zhang of Washington University in St. Louis, incorporates a form of watermarking by inserting distortions into audio recordings on social media and other platforms to make it more difficult to accurately clone a voice.
- OriginStory authenticates the original voice recordings upon creation and embeds a type of watermark into the audio stream.
- Voice Cloning Detection technology by Pindrop Security identifies voice clones and audio deepfakes in real time by examining incoming calls in two-second chunks.
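Pindrop’s models are proprietary, but the streaming structure described in the last entry, scoring a live call in fixed two-second chunks, can be sketched in a few lines of Python. Everything here is an assumption for illustration: the sample rate and threshold are placeholders, and score_chunk stands in for a trained deepfake classifier.

```python
import numpy as np

SAMPLE_RATE = 8_000        # assumed telephony sample rate
WINDOW = SAMPLE_RATE * 2   # the two-second chunks described above

def score_chunk(chunk: np.ndarray) -> float:
    """Hypothetical stand-in: a real system would run a trained classifier here."""
    return 0.0  # replace with model inference

def monitor_call(frames, alert_threshold: float = 0.8):
    """Buffer an incoming call and score every completed two-second chunk."""
    buffer = np.empty(0, dtype=np.float32)
    for frame in frames:  # audio frames arrive while the call is still in progress
        buffer = np.concatenate([buffer, frame])
        while len(buffer) >= WINDOW:
            chunk, buffer = buffer[:WINDOW], buffer[WINDOW:]
            if score_chunk(chunk) >= alert_threshold:
                yield "possible voice clone in this segment"
```

Scoring fixed-size chunks rather than whole recordings is what makes real-time alerting possible: a call-center agent or customer can be warned mid-call rather than after the money is gone.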
The FTC emphasizes that no one solution will completely solve the problem of deepfakes. The agency supports a multidisciplinary approach to preventing the harm posed by voice cloning. (See “FTC Announces Winners of Voice Cloning Challenge,”
FTC, April 8, 2024.) The World Economic Forum has a similar perspective, noting that mitigation for deepfakes has “no silver bullet.” (See “4 ways to future-proof against deepfakes in 2024 and beyond,”
by Anna Maria Collard, World Economic Forum, Feb. 12, 2024.)
To disseminate information and to build resilience against deepfakes, members of public and private consortia are working together, including the Coalition for Content Provenance and Authenticity, which is developing technical standards for certifying
the source and history of media content. A coalition of publishing and technology members (BBC, CBC/Radio Canada, IPTC, Media City Bergen, Microsoft and The New York Times), dubbed Project Origin, is undertaking similar efforts, working to foster
confidence in digital news by tracking content from creation to distribution, demonstrating its integrity. Joint efforts from private and public entities are necessary to combat the threats that deepfakes pose not just on a personal level, but to
governments and society.
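The provenance approach these consortia are standardizing comes down to one idea: bind a signed record to the media at creation and re-verify it at every step of distribution. The Python sketch below illustrates only that idea; the actual C2PA specification uses signed manifests with X.509 certificates and far richer metadata, not the stand-in HMAC key shown here.

```python
import hashlib, hmac, json, time

SIGNING_KEY = b"publisher-secret"  # stand-in for a real signing certificate

def create_manifest(media: bytes, creator: str) -> dict:
    """Record the content hash and origin at creation time, then sign the record."""
    record = {"sha256": hashlib.sha256(media).hexdigest(),
              "creator": creator,
              "created": time.time()}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_manifest(media: bytes, record: dict) -> bool:
    """Any edit to the media or the record invalidates the hash or the signature."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["sha256"] == hashlib.sha256(media).hexdigest())
```

verify_manifest fails the moment a single byte of the media changes, which is what lets a newsroom or platform demonstrate that what it distributes is what the creator produced.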
[See “Reporting deepfakes” at the end of this article.]
Carolyn Conn, Ph.D., CFE, CPA, is a clinical associate professor in the Department of Accounting at Texas State University in San Marcos, Texas. Contact her at cc31@txstate.edu.
Zachary M. Kelley is a lecturer in the Department of Information Systems and Analytics at Texas State University in San Marcos, Texas. Contact him at zachkelley@txstate.edu.
Suggested readings and resources:
Deepfake technology, impacts on crime and law enforcement, and mitigation:
- “A new approach to fighting fraud while enhancing customer experience,” McKinsey & Company, Nov. 8, 2022.
- “Facing reality? Law enforcement and the challenge of deepfakes, an observatory report,” Europol Innovation Lab, 2022.
- “The People Onscreen Are Fake. The Disinformation is Real,” by Adam Satariano and Paul Mozur, The New York Times, Feb. 7, 2023.
Deepfake examples (pre-recorded):
- “A Short History of Deepfakes,” by David Song, Medium, Sept. 23, 2019.
- “Man arrested in Sydney’s west over clubs data breach,” by Max Mason, David Marin-Guzman and Zoe Samios, Financial Review, May 2, 2024.
- “Third-party providers a customer data ‘weak spot’, Australian privacy commissioner says,” by Josh Taylor, The Guardian, May 6, 2024.
- “When seeing is no longer believing,” by Donie O’Sullivan, CNN.
Legislation and regulation:
- “A Look at Global Deepfake Regulation Approaches,” by Amanda Lawson, Responsible Artificial Intelligence Institute, April 24, 2023.
- “Deepfake laws: is AI outpacing legislation?” by Aled Owen, Onfido, an Entrust Company, Feb. 2, 2024.
- “EU AI Act: first regulation on artificial intelligence,” European Parliament, Aug. 6, 2023.
- “FTC Announces Impersonation Rule Goes into Effect Today,” U.S. Federal Trade Commission press release, April 1, 2024.
- “The FTC is trying to help victims of impersonation scams get their money back,” by Wes Davis, The Verge, April 1, 2024.
- “Deceptive Audio or Visual Media (‘Deepfakes’) 2024 Legislation,” National Conference of State Legislatures, updated May 7, 2024.
Reporting deepfakes
If you believe you’ve been the victim of a deepfake scam or were able to avert an attempt, report it to law enforcement. Authorities need to add incidents to their databases to compile a complete picture of deepfakes with the goal of identifying and, hopefully, stopping the perpetrators. In the U.S., report online via ReportFraud.ftc.gov to the Federal Trade Commission (FTC). In the European Union, victims of fraud can report it via Victim Support Europe, an umbrella organization working across multiple countries. Reporting deepfake fraud is essential so that law enforcement and government agencies, along with private companies, can coordinate their efforts to prevent and detect such frauds and to identify and punish the perpetrators.