Lessons from a trio of AI deepfake fraud schemes
Featured Article

Deepfakes complicate identity theft; here’s how fraud examiners can fight back

By Heidi J.T. Exner, J.D., CFE, CIA

Since the earliest known deepfake attack in 2019, anti-fraud professionals across the globe have witnessed the rapid development of artificial intelligence to enhance schemes involving identity theft. The author traces the evolution of AI-assisted identity theft attacks through three high-profile deepfake schemes and details methods and tools that fraud examiners must add to their cybersecurity arsenals to fight back.

The CEO of a U.K. energy company was certain he’d talked to the head of his firm’s parent company in Germany. He recognized his boss’s German accent and lilt over the phone, so when the boss urgently instructed him to transfer 220,000 euros ($243,000) to the company’s supplier in Hungary, the CEO didn’t hesitate. But what the CEO thought was a request to pay a vendor became the earliest known deepfake attack against a company. Deepfakes are synthetic media creations in which a person’s image or voice is swapped with another person’s voice or likeness. In this scheme, which occurred in 2019, fraudsters cloned the German executive’s voice with artificial intelligence (AI).

But the fraudsters who orchestrated that scheme hadn’t reached the level of sophistication of those who conducted the 2024 deepfake attack against the multinational engineering firm Arup. In this scheme, criminals choreographed an elaborate multi-person video conference call featuring synchronized deepfakes of the company’s chief financial officer and senior staff. The video conference was so convincing that the only real employee invited to the meeting authorized transfers totaling $25.6 million. The fraudsters pulled off this attack without breaching Arup’s internal systems; the attack relied solely on “technology-enhanced social engineering,” in which AI is used to enhance traditional social engineering tactics. It also appears that Arup’s attackers recognized critical internal control vulnerabilities within the company: The employee who authorized the transactions reportedly had power to approve large sums of money for transfer without additional oversight.

Then, in 2025, there was a series of deepfakes targeting Italy’s business elites. Fraudsters used AI-generated voice cloning to impersonate Italian defense officials, including Defense Minister Guido Crosetto, claiming they urgently needed money to pay for the release of kidnapped journalists in the Middle East. Fashion icon Giorgio Armani, Prada co-founder Patrizio Bertelli, and Massimo Moratti, former owner of the football club Inter Milan, were among the targets of the scheme, although Moratti appeared to be the only victim to send money to the scammers.

The Italian deepfake scheme represents a complex, multifaceted development in AI-powered fraud. Unlike the 2019 U.K. energy firm case that relied on simple voice cloning or the 2024 Arup case in which fraudsters exploited a company’s internal hierarchy, the Italian deepfakes displayed increased sophistication through multi-person impersonations, real-time adaptability, and intricate social engineering practices to mimic natural dialogue and exploit the patriotic sentiments of its victims.

Lessons from a trio of AI deepfake fraud schemes

In the earliest case, from 2019, the scam operated as an extension of older fraud tactics, with the cloned voice used to carry out a traditional social engineering scheme. The Arup case introduced a conversational dynamic to AI-assisted fraud: AI systems began to respond intelligently to victims, weaving real-time adaptation into prewritten scripts, much like Mozart’s strings trading motifs with the piano. In the Italian case, the scam unfolded as an autonomous dialogue among multiple AI agents, each convincing, unpredictable and self-sustaining, as they persuaded targets they were calling from inside government offices in Rome.

These incidents also represent an important turning point in identity-theft attacks. In these cases, criminals didn’t rely on stolen passwords or malware to get information and steal money; instead, they relied on deepfakes for impersonation. The virtual battle is no longer won solely by auditing code or monitoring for cybercrimes, as these frauds intermingle humanity and technology in a dangerous dance in which no one can trust a face or voice on the other end of the line.

As technology evolves, so do the methods of those seeking to exploit vulnerabilities. That makes real-time detection, adaptive monitoring, management plans that let controls adjust dynamically as risks change, and cross-industry collaboration essential for fraud fighters and risk management professionals alike.

Digital identity theft and deepfakes in 2026

The convergence of artificial intelligence, deepfake technology, credential hijacking and cross-border fraud rings has elevated the art of identity-theft attacks.

Deepfake schemes show how identity and human emotion can be weaponized. In these cases, attackers sidestepped digital security measures and human skepticism, leading employees to override natural caution that might accompany more conventional schemes. This deception exploited both human trust and technology, showcasing the intertwined nature of current digital identity threats.

In the first half of 2025, more than 118,000 cases of identity fraud were reported in the U.K., and cases of credential leakage, the unauthorized exposure of user credentials, increased globally by more than 160% over the prior year. In the U.S., nearly 60% of businesses reported higher fraud losses last year, primarily from advanced identity-related attacks that exploited human and machine identities, which are the credentials assigned to machine users that operate in a variety of capacities, such as AI-created agents that are assigned tasks within a company’s network. One report suggests that 10 new victims’ digital identities are compromised every second through malware, phishing, “pharming,” deepfakes or sophisticated fraud-as-a-service platforms, where fraudsters sell personally identifiable information (PII) on the dark web.

Digital identity threats have traditionally focused on human accounts instead of machine identities. But now anti-fraud professionals have a broader landscape to contend with as AI agents, bots, internet of things (IoT) sensors and cloud-based applications also have credentials and access rights. Machine identities are a soft target: A whopping 1,600% increase in machine identity attacks was reported in 2024, yet the most recent (2020) data indicates that more than 60% of organizations admit they don’t secure nonhuman identities as rigorously as human accounts.

This gap allows adversaries to infiltrate automated systems, thereby gaining access to data, manipulating transactions and even perpetrating autonomous fraud operations. Fraud examiners and risk management professionals must adapt their audit protocols and investigative techniques to keep pace with these trends. Digital forensics must account for machine and application activity, not merely human users’ logins, and it must account for the interplay of human and nonhuman activity. Auditors and fraud examiners, in turn, must work closely with digital forensics specialists and adapt their monitoring and response procedures accordingly.

Anatomy of the modern identity fraud operation

Today’s fraudsters prefer to “log in” with valid (but manipulated or stolen) credentials, rather than invest time in hacking technical perimeters. They rely on vast stores of breached data: 425.7 million accounts were reportedly compromised last year alone. These breaches are entry points to bank accounts, public-sector databases, insurance systems and corporate networks.

Once inside a system, fraudsters can remain undetected for months, providing ample time for them to steal data and money and set up synthetic accounts.

Generative AI has turned deepfakes and “liveness spoofing” videos, such as those used in the Arup scheme, from a curiosity into a serious threat. Video reenactments, in which attackers use AI to animate faces from static images, have been used to defeat biometric security controls in U.S. financial institutions and government portals. Organized criminal rings have developed techniques to reuse the same facial biometrics with different credentials, or vice versa, fooling systems that rely solely on a single biometric factor.

Synthetic identity fraud, in which criminals combine real and false elements to create new, convincing digital personas, now accounts for the majority of “new account” frauds in banking, insurance and online gambling platforms. According to Cifas’ 2025 report, there was a 109% spike in identity fraud targeting gambling platforms in the U.K., with criminals exploiting weak verification protocols and even the identities of deceased persons.

Cross-session and serial attacks

Many organized fraud rings, particularly in Latin America and Southeast Asia, reuse identity elements. By deploying the same device or biometric credentials across hundreds of attempts, they exploit any system lacking real-time, cross-session detection. This makes trend analysis and behavioral anomaly detection critical tools for fraud examiners, as they allow fraud fighters to analyze user behavior patterns and identify anomalies and potential fraudulent activities in real time.
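
To make cross-session detection concrete, here is a minimal sketch in Python. The event shape, the 24-hour window and the five-identity threshold are illustrative assumptions, not a production design; a real system would consume events from a streaming pipeline and tune thresholds to its own traffic.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical event shape: (device_fingerprint, identity_id, timestamp).
REUSE_THRESHOLD = 5           # distinct identities per device before flagging
WINDOW = timedelta(hours=24)  # rolling window for "cross-session" reuse

def flag_cross_session_reuse(events):
    """Return device fingerprints presented by suspiciously many identities."""
    seen = defaultdict(set)   # fingerprint -> identity_ids seen in the window
    latest = max(ts for _, _, ts in events)
    for fingerprint, identity, ts in events:
        if latest - ts <= WINDOW:
            seen[fingerprint].add(identity)
    return {fp: ids for fp, ids in seen.items() if len(ids) >= REUSE_THRESHOLD}

# Example: one device opening accounts under six different names.
events = [("dev-A", f"user-{i}", datetime(2025, 6, 1, 12, i)) for i in range(6)]
events.append(("dev-B", "user-99", datetime(2025, 6, 1, 12, 0)))
print(flag_cross_session_reuse(events))  # flags dev-A, not dev-B
```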

First-party fraud

The ubiquity of “first-party fraud,” where individuals willingly sell or misuse their own identities for financial gain, is also a rising challenge: Reported instances skyrocketed from 14.6% to 35.9% of all reported digital frauds from 2024 to 2025. These crimes are often underreported for reasons ranging from victims’ embarrassment to lack of knowledge about where to report incidents, underscoring the importance of vigilance and ethical advocacy from the fraud-fighting community.

Deepfake scams are emblematic of digital identity theft trends

  • Hybrid attacks combining human and machine identities: The attackers manipulated a human victim in conjunction with AI-generated machine identities (the deepfakes), illustrating the dual threat vectors now exploited by fraudsters.
  • Sophisticated social engineering: Traditional phishing attempts have evolved into high-tech social engineering, blending AI-generated audio-visual deception with psychological tactics to exploit trust.
  • Challenges to biometric authentication: Biometric systems reliant on facial or voice recognition alone are vulnerable to AI-enabled spoofing, calling for multimodal, continuous and dynamic verification approaches.
  • Cross-border and cross-session schemes: Cross-session fraud, also called cross-channel fraud, occurs when criminals exploit multiple channels that seem legitimate individually but together form a coordinated scheme, such as opening an account online, changing details by phone, and withdrawing funds through a mobile app.

To combat advanced digital attacks, fraud examiners must consider:

  • Multifactor and multimodal authentication: Moving beyond single-factor biometrics, combining behavioral analytics, device fingerprinting, location checks and continuous authentication can dramatically improve resistance to deepfake-enabled breaches.
  • Real-time and adaptive threat monitoring: Implementing AI-driven monitoring that detects anomalous logins and transactions is critical: It enables continuous, real-time detection of subtle, evolving threats, reduces false positives, supports rapid incident response and strengthens the organization’s overall security and compliance posture. (A minimal sketch of this kind of anomaly flagging follows this list.)
  • Employee awareness and training: Increasing frontline employee education about emerging identity fraud tactics, including deepfakes and synthetic identities, empowers personnel to maintain vigilance and question unexpected digital requests.
  • Incident response preparedness: Maintain well-defined plans emphasizing rapid detection, containment, cross-department coordination, and cooperation with law enforcement to expedite recovery and mitigate losses.
  • Cross-industry intelligence sharing: Fraud examiners, auditors, risk management professionals, cybersecurity specialists and regulatory bodies must collaborate internationally, sharing threat intelligence and best practices to stay ahead of evolving fraud schemes.
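
As promised above, here is a minimal sketch of anomaly flagging on login metadata, written in Python. The baseline format and the z-score threshold are illustrative assumptions; production systems model far richer signals.

```python
from statistics import mean, stdev

def is_anomalous_login(history, login_hour, country):
    """Flag a login whose hour or origin deviates from the user's baseline.

    history: list of (hour, country) tuples from prior legitimate logins.
    """
    hours = [h for h, _ in history]
    countries = {c for _, c in history}
    new_country = country not in countries
    if len(hours) < 2:
        return new_country            # not enough data for an hour baseline
    mu, sigma = mean(hours), stdev(hours) or 1.0
    odd_hour = abs(login_hour - mu) / sigma > 3   # crude z-score test
    return new_country or odd_hour

history = [(9, "UK"), (10, "UK"), (9, "UK"), (11, "UK")]
print(is_anomalous_login(history, 3, "UK"))   # True: 3 a.m. is far off baseline
print(is_anomalous_login(history, 10, "HK"))  # True: first login from Hong Kong
```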

Organizational defenses that anti-fraud professionals must champion

Multifactor authentication and dynamic access controls

Employing multifactor authentication for systems is no longer optional, especially for sensitive roles and privileged accounts. Fraud examiners and qualified auditors should work collaboratively to audit for compliance and advocate for dynamic risk-based gating, such as device fingerprinting and location analysis, that limits exposure even when credentials are compromised.

Adaptive identity and threat monitoring

Continuous monitoring of both human and machine identities is essential. This includes:

  • Credential-exposure alerting: Leverage services that scan for leaked credentials and compromised keys on the dark web.
  • Decoy (“canary” or “honey pot”) accounts: These are fake user accounts or identities that have no legitimate business use; they exist solely to attract and detect attackers who are probing, stealing credentials, or committing fraud. Deploying these access points may lure and flag attackers in real time. (A minimal sketch follows this list.)
  • AI-driven behavioral analytics: Implement solutions that baseline normal user and device behavior and flag anomalies such as unusual login times, geographic variance (users logging in from scattered locations) or cross-system pivoting, in which a fraudster uses one compromised system or account as a stepping stone to move into others.
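
The following minimal Python sketch shows the canary-account idea from the list above. The account names and the alert routing are hypothetical placeholders; in practice, alerts would flow to a SIEM or incident-response queue.

```python
# Decoy credentials have no legitimate use, so ANY authentication attempt
# against them is a high-confidence alert. Names below are placeholders.
CANARY_ACCOUNTS = {"svc-backup-legacy", "j.doe.finance-temp"}

def on_auth_attempt(username, source_ip, alert=print):
    """Call from the authentication pipeline on every login attempt."""
    if username in CANARY_ACCOUNTS:
        # In production, route to the SIEM/incident-response queue instead.
        alert(f"CANARY TRIPPED: {username} probed from {source_ip}")
        return False   # always deny; the account is a trap
    return True        # defer to normal authentication for real accounts

on_auth_attempt("j.doe.finance-temp", "203.0.113.7")
```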

Data minimization and zero-trust architectures

Organizations can reduce the blast radius of a data breach by minimizing the access to and retention of personally identifiable information and by rigorously segmenting what remains. Data segmenting is the practice of grouping data into distinct subsets based on characteristics like type, sensitivity, usage, or who should be allowed to access it, then applying different controls and rules to each subset. If an attacker or insider reaches one segment, they don’t automatically acquire access to everything else, especially the most sensitive information. Segmentation supports zero-trust approaches by forcing organizations to define who needs access to what, when and how, instead of treating all internal data as equally accessible. Zero-trust frameworks that require continual re-authentication sharply reduce the effectiveness of one-and-done credential theft attempts.
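
As a toy illustration of how segmentation and continual re-authentication work together, consider the Python sketch below. The segment labels, roles and re-authentication windows are assumptions chosen for the example, not a recommended policy.

```python
from datetime import datetime, timedelta

# Per-segment policy: allowed roles plus a freshness window for re-auth.
SEGMENT_POLICY = {
    "public":   {"roles": {"staff", "auditor", "admin"}, "reauth": None},
    "pii":      {"roles": {"auditor", "admin"}, "reauth": timedelta(minutes=15)},
    "payments": {"roles": {"admin"}, "reauth": timedelta(minutes=5)},
}

def may_access(role, segment, last_auth, now=None):
    """Grant access only if the role is allowed AND authentication is fresh."""
    now = now or datetime.now()
    policy = SEGMENT_POLICY[segment]
    if role not in policy["roles"]:
        return False                  # segmentation: wrong role, no access
    window = policy["reauth"]
    if window is not None and now - last_auth > window:
        return False                  # zero trust: stale session, re-authenticate
    return True

print(may_access("staff", "payments", datetime.now()))   # False: role barred
print(may_access("admin", "payments", datetime.now()))   # True: fresh admin session
```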


The human and technical interface: Challenges to biometric authentication

Both the Italian and Arup cases highlight how AI-enabled identity fraud uniquely blends human trust and emerging technologies. Biometric systems that rely solely on face or voice recognition are especially vulnerable to deepfake attacks, as attackers can bypass them with digitally fabricated identities. This underscores the need for multimodal authentication that combines biometrics with behavioral analytics, device fingerprinting, geolocation and continuous authentication to establish trust dynamically. Examples of robust multimodal use cases include:

Example 1: Banking app session

A mobile banking app starts with a fingerprint scan (biometric) for initial login, then continuously monitors keystroke dynamics and swipe patterns (behavioral analytics), cross-checks the device’s unique fingerprint (a browser/OS/hardware hash), confirms that geolocation matches the user’s usual banking area, and watches mouse movements and typing speed. If any signal drifts (e.g., an unusual location or typing style), the app prompts re-verification or locks the session.
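
Here is a minimal Python sketch of the behavioral piece of such a session, assuming a single feature (mean inter-keystroke interval) and an arbitrary 40% drift tolerance; real systems score many signals jointly.

```python
from statistics import mean

def cadence_drift(baseline_ms, session_ms, tolerance=0.40):
    """Return True if this session's typing rhythm drifts past tolerance."""
    base, live = mean(baseline_ms), mean(session_ms)
    return abs(live - base) / base > tolerance

baseline = [110, 120, 115, 118, 112]   # user's enrolled inter-key gaps (ms)
session  = [210, 190, 220, 205]        # much slower: possibly not the user

if cadence_drift(baseline, session):
    print("Behavioral drift detected: prompt re-verification")  # step-up auth
```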

Example 2: Enterprise SSO dashboard

During corporate single sign-on, facial recognition (biometric) confirms the user, while behavioral analytics tracks mouse trajectories and keystroke rhythm, device fingerprinting verifies the trusted laptop’s specs and IP reputation, geolocation ensures office proximity, and continuous checks on screen taps and app usage patterns dynamically score trust; if an anomaly drops the score, step-up authorization, such as a voice challenge, is triggered.

Example 3: E-commerce checkout flow

For high-value online purchases, an iris or face scan (biometric) pairs with behavioral analytics on typing cadence and mouse pressure, device fingerprinting (screen resolution, plugins), geolocation tied to billing address history, and ongoing session monitoring of touch gestures. If the trust score falls due to a mismatched location or erratic behavior, the system seamlessly requires a secondary biometric, such as voice, before completing the transaction.

Employee training plays an equally vital role. Even the most advanced technology can be undermined by targeted social engineering exploiting natural human trust in familiar voices or faces. Increasing frontline staff awareness of emerging threats like deepfakes, synthetic identities and exploit-as-a-service platforms is essential to strengthen organizational resilience. The Arup case is an excellent example of how employee training may have prevented significant corporate loss.

Even with technical defenses, human error remains a top risk vector. To combat this, fraud fighters should insist on:

  • Annual anti-phishing and social engineering training for all staff.
  • Executive security briefings to address deepfake risks and tailored threats.
  • Phishing simulations to maintain preparedness and refine responses.

Incident response readiness

Anti-fraud professionals must ensure that their organizations maintain and regularly test incident response plans covering rapid detection, containment, notification, and legal and regulatory reporting for both human and machine identity compromises.


Expanding tools and tactics: The fraud examiner’s arsenal

Combating deepfakes and synthesized attacks

Modern fraud prevention employs multifactor, multimodal verification, which combines biometrics, behavioral signals and contextual device data. In the Arup fraud, voice biometrics on the video call platform would have analyzed vocal traits (pitch, timbre, cadence) against the real CFO’s enrolled profile, rejecting the deepfake audio instantly even if the visuals looked convincing; modern systems reportedly catch over 90% of clones within seconds of speech. Facial recognition could cross-check live video feeds for liveness cues, such as eye blinks or micro-expressions that deepfakes mismatch, prompting a secondary live selfie or halting the session before transfer approval.
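
Here is a minimal sketch of that voice-matching step, assuming a speaker-embedding model (represented by a placeholder function) and an arbitrary cosine-similarity threshold; real verification systems use trained models and calibrated thresholds.

```python
import numpy as np

def embed_voice(audio_samples: np.ndarray) -> np.ndarray:
    """Placeholder: a real model maps audio to a fixed-length voiceprint."""
    rng = np.random.default_rng(int(audio_samples.sum()) % 2**32)
    return rng.standard_normal(192)

def same_speaker(enrolled: np.ndarray, live: np.ndarray, threshold=0.75) -> bool:
    """Compare the caller's voiceprint to the enrolled executive's profile."""
    a, b = embed_voice(enrolled), embed_voice(live)
    cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return cosine >= threshold

cfo_enrollment = np.ones(16000)   # stand-in for the CFO's enrolled audio
caller_audio = np.zeros(16000)    # stand-in for the deepfake caller's audio

if not same_speaker(cfo_enrollment, caller_audio):
    print("Voice does not match enrolled profile: block and escalate")
```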

Behavioral analytics would monitor the employee’s interactions during and after the call. Unusual keystroke dynamics or mouse patterns while entering transfer details (e.g., hesitation, or copying and pasting from an untrusted source) that deviate from baseline use, combined with atypical urgency in the session flow, would raise the risk score and require step-up verification, like a real-time push to the CFO’s device.

Device fingerprinting of the video platform and browser would detect anomalies like a new IP address or location that doesn’t match the U.K.-based CFO’s (e.g., an attacker-controlled server), while geolocation would flag Hong Kong-based access for a supposed U.K. executive. Continuous authorization during transfers would tie together the employee’s device history, network patterns and transaction velocity to flag the request as high-risk, enforcing dual approval or a delay.

The system assigns a live risk score fusing all of these signals: If deepfake audio fails voice biometrics, geolocation mismatches and behavior spikes the risk, trust drops below the threshold, blocking transfers outright or routing them to a human overseer with alerts. This zero-trust loop could have thwarted the scam, as no single layer of this kind was checked in Arup’s manual process.
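
A toy version of that signal fusion, with weights, signal names and the blocking threshold all chosen for illustration:

```python
# Each tripped signal contributes its weight to the live risk score.
WEIGHTS = {
    "voice_biometric_fail":      0.35,
    "geolocation_mismatch":      0.25,
    "behavioral_anomaly":        0.20,
    "new_device":                0.10,
    "high_transaction_velocity": 0.10,
}
BLOCK_THRESHOLD = 0.70

def risk_score(signals: dict) -> float:
    """signals maps each signal name to True (tripped) or False (clean)."""
    return sum(w for name, w in WEIGHTS.items() if signals.get(name))

signals = {"voice_biometric_fail": True, "geolocation_mismatch": True,
           "behavioral_anomaly": True, "new_device": False,
           "high_transaction_velocity": False}

score = risk_score(signals)
if score >= BLOCK_THRESHOLD:
    print(f"Risk {score:.2f}: block transfer and route to human review")
else:
    print(f"Risk {score:.2f}: allow with step-up verification")
```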

Advanced “liveness” detection, cross-session device tracking and continuous biometric analysis are supplanting legacy single-point controls, and these tactics may have gone a long way toward preventing the major deepfake scams discussed in this article. Liveness detection technology uses challenges, such as head turns, blinks or randomized prompts, paired with AI that detects static or synthetic media. In the Arup case, video-platform liveness checks would likely have failed the deepfake faces for lacking micro-movements or depth. In the Italian voice scheme, liveness detection requiring live speech responses would have rejected cloned audio instantly. Modern systems verify physiological signals that are absent in AI-generated media.
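
A minimal sketch of challenge-response liveness, with the challenge list and the verification hook as hypothetical placeholders; the key idea is that a prerecorded or canned deepfake stream cannot anticipate a randomized prompt.

```python
import random
import time

CHALLENGES = ["turn your head left", "blink twice", "read aloud: 7-4-2-9"]

def liveness_check(verify_response, timeout_s=5.0):
    """Issue a random challenge and require a correct, prompt response.

    verify_response is a hook for a vision/speech model that judges
    whether the live feed actually performed the challenge.
    """
    challenge = random.choice(CHALLENGES)
    issued = time.monotonic()
    ok = verify_response(challenge)                   # model judges the feed
    within_time = (time.monotonic() - issued) <= timeout_s
    return ok and within_time                         # slow or wrong = likely synthetic

# A replayed deepfake cannot satisfy a prompt it has never seen:
print(liveness_check(lambda challenge: False))        # False -> fail the session
```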

Real-time cross-border collaboration

Organized fraud rings are experts at exploiting jurisdictional silos. Fraud fighters should promote real-time threat-intelligence sharing internally, across sectors and with law enforcement to rapidly identify and respond to emerging patterns. For example, active participation in industry roundtables and government task forces amplifies detection strength and deepens fraud examiners’ professional networks for support when attacks occur.

First-party fraud: Ethical action

Fraud prevention for vulnerable groups, like children, requires collaboration with consumer protection agencies and parents. Anti-fraud professionals must promote policies for monitoring “dormant” credit reports, early age verification and swift responses to suspected identity misuse. In first-party fraud, a fraud examiner’s ethical stance and ability to distinguish between criminal intent and economic desperation are vital. This requires empathy, legal clarity, and, where possible, restorative solutions.

The schemes discussed in this article targeted corporate finance professionals and business leaders, not children or vulnerable consumers; they were third-party criminal deepfake impersonations of trusted figures, exploiting urgency and visual and audio trust rather than identity misuse born of economic desperation. Still, the aftermath shows ethical action at work: Arup worked with the Hong Kong police to recover some funds, and the Italian cases prompted police to promote recovery procedures, which in turn could help standardize deepfake reporting.

Trends every fraud fighter can track

Every fraud fighter should track the following trends:

  • Industrialization of fraud: The mass production of scams using automation, AI and global crime networks, such as fraud-as-a-service platforms, deepfake tools and data pipelines, enables high-volume, repeatable attacks at unprecedented scale, far beyond the reach of lone actors. In this sense, AI acts as both a threat and a defense: While it lowers criminals’ costs and increases attack scale, it also powers more advanced anomaly detection.
  • Speed of compromise: The average “dwell time” (from intrusion to detection) is long with automated credential attacks. This article proposes that real-time detection and response will become the industry “gold standard.”
  • Consumer expectations: Balancing user experience with robust, behind-the-scenes verification is a business imperative. Designing secure but seamless onboarding, especially in remote and digital-first environments, is as much an art as a science. As with all business processes, onboarding is auditable and can be designed with risk-informed input and monitored internally or externally via annual reviews.
  • Machine identity security: Every AI agent, business application and bot is a potential target for large-scale compromise.


The human element in fighting digital fraud

The trio of incidents in this article reminds us of the delicate balance between trust and skepticism in daily operations. The seamless blend of human psychology and high technology is both fascinating and daunting. It shows that even the smartest organizations can be undone by the intersection of sophisticated AI tools and human vulnerabilities. All professionals, not just those who work in technology, have a responsibility to cultivate an organizational culture that questions and verifies, especially when large sums of money and reputations are at stake. Fraud fighting is as much about people as it is about protocols and technology, demanding empathy, vigilance and innovation. The evolving nature of digital identity theft requires anti-fraud professionals to be adaptable, proactive and collaborative. Multiple perspectives, including those of cybersecurity, digital forensics, auditing, business process management and fraud investigation professionals, are crucial to fighting digital fraud.

Success in defending against these hybrid threats depends on layered defenses, continuous monitoring, data-driven intelligence, and cross-industry coordination. With fraudsters industrializing their operations, members of the anti-fraud community must match fraudsters’ sophistication through leadership, innovation and unwavering ethical standards. The future of digital trust depends on the collective expertise and vigilance of fraud-fighting professionals.

For Arup, the financial damage was severe. The company lost $25.6 million and experienced significant operational disruption. Although the incident prompted a broader corporate and industry reckoning regarding heightened security controls, it was only months later that fraudsters struck again with the Italian defense minister scheme.

Digital identity theft targets both humans and machines with relentless frequency. Recent cases typify the industrialization of fraud, where AI and fraud-as-a-service platforms allow criminals to scale attacks, compromise new victims every few minutes and constantly innovate their artform.

Deepfakes, synthetic identities and cross-session exploits have increasingly become central to digital identity attacks, challenging fraud examiners to develop dynamic detection techniques and to champion strict, adaptive security frameworks. This trio of deepfake scams illustrates how digital identity theft has intensified in sophistication, scale and monetary impact in just a few short years.

More than ever, defending against identity theft attacks requires agility, technology integration and ethical leadership to uphold trust and safeguard assets in an environment where fraudsters industrialize their attacks with alarming efficiency.

Heidi J. T. Exner, J.D., CFE, CIA, is a founding partner of Ethical Edge PI and Corporate Advisors, Inc. Contact her at hexner@ethicaledgeadvisors.com.
