The Association of Certified Fraud Examiners (ACFE), in partnership with SAS, has released the
2026 Anti-Fraud Technology Benchmarking Report, offering insight into how organizations are using technology to detect and prevent fraud, as well as the challenges they face in implementing new tools.
Now in its fourth edition, the report draws on survey responses from Certified Fraud Examiners (CFEs) and anti-fraud professionals around the world to examine current practices, emerging technologies and areas where organizations continue to build capability.
Organizations Continue Expanding Use of Artificial Intelligence (AI) and Data Analytics
The report indicates that organizations are continuing to integrate technologies such as AI and machine learning into their anti-fraud programs.
According to the survey, one in four organizations (25%) currently use AI or machine learning in their data analysis initiatives, up from 18% of organizations observed in the 2024 study. An additional 28% expect to adopt these tools within the next two years.
Generative AI is also being used in a range of anti-fraud applications. Among respondents whose organizations currently use these tools, the most common uses include phishing and scam detection (49%), risk identification and assessment (46%), and report writing (45%).
Increases in AI-Enabled Fraud Schemes
In addition to examining how organizations are using technology, this year’s report also explores for the first time how fraudsters are leveraging AI and emerging tools to commit fraud.
More than half of respondents indicated that the volume of common AI-powered fraud schemes has increased over the past two years. At least two-thirds expect those schemes to increase again over the next two years.
Respondents most frequently cited increased instances of deepfake social engineering and consumer frauds and scams impacting organizations. Looking ahead to the next two years, generative AI document fraud and forgery, deepfake social engineering and deepfake digital injection were among the schemes most expected to grow.
AI Preparedness and Governance Remain Areas of Focus
While many organizations are adopting new tools, the report highlights areas where respondents indicate continued opportunity for development. Only 7% of respondents believe their organization is more than moderately prepared to detect or prevent AI-powered fraud.
The findings also point to ongoing considerations related to governance. For example, 75% of respondents said bias or lack of fairness is an important factor when adopting AI, but only 18% reported that their organizations test AI models for bias or fairness.
Similarly, while most respondents indicated that explainability is important, relatively few reported a high level of confidence in explaining how AI or machine learning models arrive at decisions.
What Do These Findings Mean for Fraud Fighters?
The findings in this report point to a set of questions many organizations are already working through:
- How quickly should new tools be adopted and where do they meaningfully improve outcomes?
- What does effective oversight look like as AI becomes more embedded in anti-fraud workflows?
- How can teams balance growing expectations with constraints around budget, data and personnel?
For anti-fraud professionals, these considerations inform critical decisions that shape how programs are built, resources are allocated and risks are addressed at both micro and macro levels.
The full report breaks down trends across industries, regions and organizational sizes, offering more context on how organizations can approach technology decisions and where challenges persist.
Download the Full Report