Generative artificial fraud
Innovation Update

Generative artificial intelligence is now the fraud perpetrator

By Zachary M. Kelley, Carolyn Conn, Ph.D., CFE, CPA

Fraud examiners have historically relied on reviewing documents to identify fabricated invoices, altered records and falsified financial statements. But now fraudsters are using generative artificial intelligence to autonomously produce near-perfect fraudulent artifacts. GAI is now the fraud perpetrator. Here are ways to detect and deter it.

Infamous fraudster Barry Minkow, the former CEO of ZZZZ Best, once described his chief financial officer (CFO) as a “genius with the copy machine.” During a 1990s jailhouse interview with the Association of Certified Fraud Examiners (ACFE), Minkow recounted the many methods they used to deceive auditors and successfully commit a multimillion-dollar financial statement fraud scheme. Estimates were that the CFO used the office copier to create more than $40 million in fictitious invoices. (The interview was recorded as part of the ACFE's Cooking the Books self-study course published in 1991.)

If Minkow and his CFO were still in the fraud business, they’d likely ditch the copy machine and begin using generative artificial intelligence (GAI) to create fake, totally original documents that are significantly more difficult to detect and that can be created at a much faster rate.

Criminals adopting technology

Fraud examiners have historically relied on reviewing physical and electronic documents — manually and via analysis software — to identify fabricated invoices, altered records and falsified financial statements. Common red flags have included irregular invoice numbering, identical vendor and employee home addresses, and obvious production flaws in identification cards. But now the “tried and true” analytical techniques are increasingly inadequate. Following the public release of ChatGPT in November 2022, users — including fraud perpetrators — have been rapidly adopting GAI models across personal and professional domains.


Fraudsters adopting new technology, of course, isn’t novel. In the 1990s, the introduction of commercially available photo editing and graphic design software significantly reduced the expertise required to produce convincing counterfeit passports and other forms of identification. In testimony before the U.S. House Select Committee on Homeland Security, John S. Pistole, then assistant director of the FBI’s Counterterrorism Division, warned in 2003 that “… the skill and time needed to produce high-quality counterfeit documents has been reduced to the point that nearly anyone can become an expert. Criminals and terrorists are now using the same multimedia software used by professional graphic artists.” He emphasized that such documents facilitated a variety of crimes, including bank fraud, credit card fraud, wire fraud, money laundering and fugitive concealment.

The same dynamic now applies to GAI. Fraud examiners welcome these technologies for their potential to improve detection and analysis while recognizing the inevitability of their misuse by criminals. What distinguishes the current environment is that generative models are no longer merely tools that assist human fraudsters. They now can independently produce fraudulent artifacts at scale. In other words, GAI has become a fraud perpetrator. 

Generative artificial fraud perpetrators (GAFPs)

Early criminal applications of artificial intelligence focused primarily on efficiency, including improving phishing campaigns, automating social engineering, and accelerating other tried-and-true fraud schemes. GAI expands this capability by enabling the creation of synthetic identities and documents that closely resemble legitimate records.


In the U.S. alone, losses attributed to synthetic identity fraud were estimated at approximately $35 billion in 2023, according to the Federal Reserve Bank of Boston. These losses increasingly stem from machine-generated artifacts rather than manually fabricated documents.

This shift requires a reframing of how we conceptualize and fight fraud. Rather than focusing exclusively on identifying people, fraud examiners must also identify the systems that generate fraudulent materials. We propose the term generative artificial fraud perpetrators (GAFPs) to describe models that autonomously produce fraudulent documents, identities or transaction artifacts with minimal human involvement beyond the simple input of prompts.

GANs and synthetic deception

The development of GAFPs is enabled in part by generative adversarial networks (GANs), first introduced by Ian Goodfellow and his colleagues in 2014. GANs consist of two neural networks: a generator (G), which produces synthetic data, and a discriminator (D), which evaluates whether data appears authentic.

Through iterative competition, G improves its ability to replicate authentic artifacts, progressively reducing the discriminator’s classification advantage. For the mathematicians reading this column, the interaction can be modeled as a “minimax” optimization problem in which the two networks optimize this objective function:

The minimax optimization problem:

min_G max_D V(D, G) = E_{x ~ p_data}[log D(x)] + E_{z ~ p_z}[log(1 − D(G(z)))]

Here x is a sample from the true data distribution p_data, z is random noise drawn from a prior distribution p_z, G(z) is a synthetic sample and D(·) outputs the probability that its input is authentic.

Within this framework, the generator seeks to minimize the probability that the discriminator correctly identifies its outputs as synthetic. The discriminator seeks to maximize its accuracy in distinguishing samples drawn from the true data distribution p_data from those produced by the generator.

This adversarial process creates a sustained “arms race” between generation and detection. As the discriminator improves, the generator produces increasingly realistic outputs. Over time, the generator’s artifacts may become difficult to distinguish from authentic records using traditional fraud detection techniques and controls.
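To make the objective above concrete, here is a minimal pure-Python sketch (toy numbers, not a trained network) that evaluates the value function V(D, G) for two hypothetical discriminators, one that still separates real from synthetic samples and one the generator has fooled:

```python
import math

def value_fn(d_real, d_fake):
    """V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))].
    d_real: discriminator scores on authentic samples.
    d_fake: discriminator scores on generator outputs."""
    term_real = sum(math.log(p) for p in d_real) / len(d_real)
    term_fake = sum(math.log(1 - p) for p in d_fake) / len(d_fake)
    return term_real + term_fake

# A sharp discriminator: confident on real (scores near 1)
# and on fake (scores near 0) samples.
sharp = value_fn([0.9, 0.95], [0.1, 0.05])

# A fooled discriminator: it cannot tell the two apart,
# so every score collapses to 0.5.
fooled = value_fn([0.5, 0.5], [0.5, 0.5])

# The generator "wins" by driving V down; at D = 0.5 everywhere,
# V reaches -2 * log 2, the theoretical equilibrium value.
print(sharp, fooled)
```

The gap between the two values is the "classification advantage" the generator is trying to erase: as training progresses, V moves from the sharp case toward the fooled case.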

After being trained as part of a GAN, the generator can be deployed to produce fraudulent documents at scale. Fraudsters no longer need to manually forge individual invoices or pay stubs but can spawn large volumes of synthetic documents with subtle random variations. These artifacts replicate pixel-level textures, formatting and numerical distributions that traditional fraud detection systems weren’t designed to evaluate.

Industry surveys indicate that GANs are already being used to generate counterfeit identity documents, utility bills and invoices. One fintech platform reported that AI-generated documents accounted for as much as 70% of detected document fraud attempts within its system. Human fraud perpetrators are now using GAFPs as their partners in white-collar crime.

Fraud at scale and speed

Generative systems transform fraud from a manual craft into an automated process. Consider this operational scenario:

  1. A criminal obtains a legitimate name, government identification number and date of birth from a data breach.
  2. They enter that information into a GAN-based identity generation system with a prompt such as, “Create a pay stub for employer X in city Y.”
  3. Within seconds, the system produces a polished document containing logos, line items and a matching bank-encoded magnetic ink character recognition (MICR) line.
  4. The document is submitted with a loan application and passes conventional validation checks.
  5. The synthetic identity makes an initial payment to establish apparent creditworthiness before being used for larger-scale financial exploitation.
At each stage, the fraudulent artifact is produced algorithmically. Human involvement is limited to selecting inputs and writing prompts, while the generative model deploys the deception. The model functions as an active perpetrator in the fraud scheme.
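One reason a polished synthetic document sails through "conventional validation checks" is that those checks are public and trivially satisfiable. The ABA check-digit rule for the nine-digit routing number in a MICR line is a well-known example; the sketch below (illustrative helper names) validates it, and a generative system can just as easily emit numbers that pass:

```python
def routing_number_valid(rn: str) -> bool:
    """ABA routing number check: weights 3, 7, 1 repeating over the
    nine digits must sum to a multiple of 10."""
    if len(rn) != 9 or not rn.isdigit():
        return False
    weights = (3, 7, 1, 3, 7, 1, 3, 7, 1)
    total = sum(w * int(d) for w, d in zip(weights, rn))
    return total % 10 == 0

# A structurally valid routing number passes; a random string fails.
print(routing_number_valid("021000021"))  # True
print(routing_number_valid("123456789"))  # False
```

The lesson for examiners: structural checks like this confirm only that a number is well formed, not that the underlying account or document is genuine, so they offer no defense against machine-generated artifacts.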

Why legacy detection fails

Traditional fraud controls emphasize surface-level inconsistencies such as grammatical errors, formatting irregularities, mismatched logos or missing metadata. As generative models improve, these indicators diminish. Outputs may appear coherent, visually consistent and free of obvious defects.

Some vendors are responding by deploying deep learning-based detection systems. Deep learning is a type of machine learning designed to resemble the human brain with layers of neural networks. Mitek Systems, for example, reports it uses pixel-level analysis to identify compression anomalies, texture inconsistencies and blending artifacts that may indicate synthetic generation. Similarly, 2025 research by Zong Ke and his colleagues demonstrates that GAN-based discriminators can detect manipulated payment images and deepfake artifacts with high accuracy by identifying subtle statistical anomalies invisible to human reviewers. However, real-world deployment remains an evolving challenge.

A growing ecosystem of firms, including Reality Defender, Hive AI, and watermarking initiatives such as Google’s SynthID, are developing detection tools.

Unraveling synthetic artifacts

Several techniques are available to fraud examiners to identify AI-generated fraud artifacts, including:

  • Micropattern anomaly detection. Synthetic images may exhibit repeated textures, unnatural smoothness or color channel inconsistencies detectable through noise residual analysis (examining differences between observed data and expected data). This is analogous to examining brush strokes in a painting and can reveal hidden structural differences between synthetic and authentic images.
  • Latent signature analysis. Generative models often leave statistical fingerprints in latent space representations, the hidden, compact encodings through which a model learns the underlying structure of data rather than memorizing inputs. These fingerprints can help fraud examiners link multiple documents to a common generator.
  • Semantic coherence testing. Even realistic artifacts may contain inconsistencies in business logic or operational context, such as implausible expense ratios or geographically inappropriate items like invoices for snow removal services in tropical climates. These can indicate synthetic generation.
  • Metadata and provenance verification. AI-generated documents frequently lack authentic exchangeable image file format (EXIF) data or exhibit standardized metadata fields. EXIF metadata, embedded in image files via cameras and smartphones, can help confirm date, time, location and device origin.
  • Cross-sample correlation analysis. Comparing multiple artifacts can reveal similarities that suggest automated generation or coordinated fraud campaigns.
  • Adversarial detection models. Detection systems, trained specifically to identify outputs from other GANs, can enhance resilience through continuous retraining that’s necessary as generative techniques evolve. Some of the models can be used with both financial and nonfinancial data.
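As a minimal illustration of cross-sample correlation analysis, the sketch below (hypothetical data and thresholds, far simpler than production tooling) compares documents by token-set overlap and flags pairs that are suspiciously similar, a pattern typical of mass-generated artifacts that share boilerplate with only tiny edits:

```python
def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the token sets of two documents."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def flag_near_duplicates(docs, threshold=0.6):
    """Return index pairs of documents whose overlap exceeds the
    threshold, suggesting automated generation from one template."""
    pairs = []
    for i in range(len(docs)):
        for j in range(i + 1, len(docs)):
            if jaccard(docs[i], docs[j]) >= threshold:
                pairs.append((i, j))
    return pairs

invoices = [
    "Invoice 1041 Acme Consulting subtotal 950.00 tax 78.38 total 1028.38",
    "Invoice 1042 Acme Consulting subtotal 950.00 tax 78.38 total 1030.11",
    "Receipt 77 Corner Bakery coffee 4.50 muffin 3.25 total 7.75",
]
print(flag_near_duplicates(invoices))  # [(0, 1)]
```

Real deployments would use richer features (layout, image noise residuals, metadata), but the principle is the same: similarity across supposedly independent submissions is itself a red flag.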


Adapting fraud control frameworks

Practitioner experience and industry collaboration are necessary to develop fraud control programs that address generative-scale deception, especially to:

  • Promote forensic readiness by preserving original documents and metadata.
  • Integrate probabilistic detection models trained on synthetic artifacts.
  • Expand validation beyond documents to behavioral and transactional analytics.
  • Conduct adversarial testing to simulate synthetic fraud.
  • Strengthen vendor and third-party risk governance.
  • Foster cross-disciplinary collaboration among fraud professionals, data scientists and forensic imaging specialists.
  • Retrain detection systems continuously as generative models evolve and update controls.

AI models as fraudsters

Generative adversarial crime marks the next frontier of fraud — deception by neural networks. GANs, large language models and diffusion systems can now produce documents and identities that are indistinguishable from legitimate records under traditional controls. As described by IBM, diffusion systems gradually “diffuse a data point with random noise, step-by-step, until it’s destroyed, then learn to reverse that diffusion process and reconstruct the original data distribution.” Because of the availability of such tools for use by criminals, fraud detection must shift from identifying flawed documents to identifying synthetic generation and abnormal behavior.
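The forward half of the diffusion process IBM describes can be sketched in a few lines. In this toy illustration (a scalar data point, a flat noise schedule, all numbers illustrative), the quantity alpha_bar tracks how much of the original signal survives each step; after enough steps it is effectively zero, which is the "destroyed" state the model then learns to reverse:

```python
import math
import random

def forward_diffuse(x0, betas, rng):
    """Forward diffusion: at each step, shrink the signal toward zero
    and add Gaussian noise. alpha_bar_t = prod(1 - beta_s) measures
    the surviving fraction of the original data point x0."""
    alpha_bar = 1.0
    xs = []
    for beta in betas:
        alpha_bar *= 1.0 - beta
        noise = rng.gauss(0.0, 1.0)
        xs.append(math.sqrt(alpha_bar) * x0
                  + math.sqrt(1.0 - alpha_bar) * noise)
    return xs, alpha_bar

rng = random.Random(0)          # seeded for reproducibility
betas = [0.05] * 200            # flat schedule, 200 noising steps
xs, alpha_bar = forward_diffuse(3.0, betas, rng)
print(alpha_bar)                # 0.95 ** 200: effectively zero
```

A trained diffusion model runs this process backward, denoising step by step, which is what lets it synthesize new documents and images from pure noise.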

Forensic tools must evolve to analyze textures, metadata, latent fingerprints and systemic consistency. Control systems must assume that any document — even one that appears perfect — may be synthetic and quite likely is. The war on fraud has become a war against GAI models. The adversaries now are both the person committing fraud and the generative artificial fraud perpetrator.

Zachary Kelley is an associate professor of instruction in the Department of Information Systems and Analytics at Texas State University in San Marcos, Texas. Contact him at zachkelley@txstate.edu.

Carolyn Conn, Ph.D., CFE, CPA, is a clinical associate professor in the Department of Accounting at Texas State University in San Marcos, Texas. Contact her at cc31@txstate.edu.
