
Generative AI and the Fraud Examiner

By Samuel May, May 9, 2023

This article focuses on the current capabilities of chatbots such as OpenAI’s ChatGPT, DeepMind’s Sparrow and Google’s Bard. These chatbots represent a significant step forward for artificial intelligence (AI) and its applicability to common, day-to-day use.

Chatbots are large language model (LLM) AIs: neural networks trained (either self-supervised or with guidance and controls in place) on large quantities of text-based information. Currently, users communicate with these chatbots through text, though input will soon come from images, videos and other media as the capabilities of these models increase.

With any new advancement, it can be nearly impossible to accurately predict how prevalent, successful or long-lasting a technology will ultimately be. Proponents of the current field of AI chatbots suggest that they will significantly increase productivity across virtually all job functions and give humanity an immensely powerful new tool. Opponents of the availability and use of these AIs focus more on the potential dangers of their expansion; yet even they believe the technology could be implemented across a remarkably diverse range of fields.

If you haven’t had the chance to play with one of these chatbots yet, you have likely seen videos or articles discussing their use. In fact, the May/June 2023 edition of the ACFE’s Fraud Magazine features an article discussing cybercriminals’ use of ChatGPT, with some interesting prompts and AI responses.

Generally, after users sign up and are given access to the AI, they are provided with a short list of examples, capabilities and limitations before being directed to a blinking cursor in an empty text box. Text input is limited only by the imagination of the user. Depending on which chatbot you use, responses may be restricted by the AI’s creators. ChatGPT, for instance, will refuse to engage in “offensive behavior or language towards individuals or groups” if prompted to write something vulgar or derogatory. Increasingly, these guardrails are only minor impediments to getting what you want from the AI; workarounds (or “jailbreaks”) are promulgated across the internet, from asking the AI to assume a persona to phrasing the request as a movie script or hypothetical.

A warning at the beginning

Remember that anything typed into the text box of one of these AI tools is no longer private. Fraud examiners should be extremely careful with questions they ask and any information they feed into their prompts.

How can fraud examiners make use of these chatbots?

Fraud examiners will need to pay attention to the capabilities and advancements of these AI technologies not just for their own use, but because they are already being leveraged by fraudsters (see “Is ChatGPT the newest gateway to fraud?”). The current slate of generative AI chatbots is essentially a set of new tools that fraud examiners (and perpetrators) can and should learn to use.

Perhaps the easiest on-ramp for fraud examiners looking to learn these AIs is asking about fraud investigation fundamentals. Moving into a new field or investigating a new type of business? Ask the AI chatbot what you should look out for, what kinds of frauds have occurred and how these specific kinds of businesses operate. At the very least, the chatbot can provide some useful search terms or definitions that help establish a foundation for future learning. Ask Google Bard to explain how a kickback scheme works in a healthcare setting; its answer provides a basic definition of kickbacks and several common fraud schemes. These quick and easy answers should be treated as a suggestive framework, not gospel truth. Any information provided by a chatbot could be wholly false or grossly misleading. Verify anything it provides, and conduct your own searches (and vet your own sources) for anything you will ultimately rely upon.

The simplest demonstration of the dangers of relying on an AI chatbot’s research is to ask it about yourself: start a fresh chat with Bard and ask it what it knows about you. Based on a limited (and not particularly random) sample, the information Bard returned on known individuals contained numerous errors and inconsistencies. While fraud examiners might be tempted to shortcut their own open-source intelligence (OSINT) work, these chatbots are not (currently) the best tool for the job.

What does the AI think?

This article would be incomplete without what has become the standard approach when discussing generative AI: copying and pasting answers from a chatbot. While no AI was asked to write portions of this article as a point of comparison, both ChatGPT and Bard were prompted with the question: “How can fraud examiners use generative AI?” The chatbots provided interesting responses:

Google Bard provides three uses and, interestingly, a few “use cases”:

Generative AI can be used by fraud examiners in a variety of ways, including:

  • Identifying fraudulent transactions: Generative AI can be used to identify fraudulent transactions by analyzing large amounts of data to identify patterns that are indicative of fraud. For example, generative AI can be used to identify credit card transactions that are likely to be fraudulent based on factors such as the amount of the transaction, the merchant and the customer's purchase history.

  • Investigating fraud: Generative AI can be used to investigate fraud by helping fraud examiners to quickly and efficiently gather and analyze evidence. For example, generative AI can be used to identify potential witnesses, to locate relevant documents and to trace the flow of money.

  • Preventing fraud: Generative AI can be used to prevent fraud by identifying and addressing the root causes of fraud. For example, generative AI can be used to identify vulnerabilities in payment systems, to develop training programs for employees on how to spot fraud and to create targeted prevention campaigns.

Here are some specific examples of how generative AI is being used by fraud examiners today:

  • Bank of America: Bank of America is using generative AI to identify fraudulent credit card transactions. The bank's AI system analyzes billions of transactions every day to identify patterns that are indicative of fraud. For example, the system can identify transactions that are made from unusual locations or that are for unusually large amounts of money.

  • PayPal: PayPal is using generative AI to investigate fraud. The company's AI system can quickly and efficiently gather and analyze evidence, such as emails, phone calls and social media posts. This helps PayPal to identify and prosecute fraudsters more quickly.

  • Visa: Visa is using generative AI to prevent fraud. The company's AI system is constantly scanning the global payments network for signs of fraud. If the system identifies a potential fraud, it can automatically block the transaction.

Generative AI is a powerful tool that can be used to combat fraud. As the technology continues to develop, it is likely that fraud examiners will find even more ways to use it to protect their organizations from fraud.


Fraud examiners would do well to take any “specific examples” provided by chatbots with copious amounts of salt. Asked the same question on a previous day, Bard provided a different list of three real-life examples, one of which was the ACFE itself. Bard stated that the ACFE was actively using generative AI to review data. This was wholly incorrect.


ChatGPT provides:

Fraud examiners can use generative AI in several ways, including:

  • Fraud detection: Generative AI can analyze large amounts of data and identify patterns and anomalies that may indicate fraudulent behavior. For example, generative AI can be trained to recognize fraudulent financial transactions or detect the use of false identities.

  • Synthetic data generation: Fraud examiners can use generative AI to create synthetic data sets that mimic real-world scenarios. These synthetic data sets can be used to test fraud detection systems, train machine learning models, and conduct simulations of potential fraud scenarios.

  • Fraud prediction: Generative AI can analyze historical data to predict future fraudulent activities. By analyzing patterns in data and identifying potential risk factors, generative AI can help fraud examiners anticipate and prevent fraudulent behavior.

  • Fraud investigation: Generative AI can assist fraud examiners in investigating fraudulent activities by generating possible scenarios and identifying possible suspects. For example, generative AI can analyze email communications or social media activity to identify potential fraudsters or uncover hidden connections between suspects.

It's worth noting that while generative AI can be a powerful tool for fraud examiners, it's important to use it ethically and responsibly. Generative AI should be used in conjunction with human expertise and judgment to ensure that the results are accurate and unbiased.

While machine learning does, and generative AI will, play a large role in data analysis and the detection of fraud, examiners should refrain from feeding real data into any of the currently available chatbots for fraud detection. New tools may be developed, with sufficient safeguards in place, that earn a spot in the fraud-fighting toolkit, but the current slate of chatbots is not a proper venue for fraud detection.

The creation of synthetic data, hypothetical fraud scenarios and what are essentially “practice problems” for fraud examiners is an interesting idea, and one use that poses considerably less threat to private or proprietary data. Fraud examiners looking to sharpen their data analysis techniques could turn to generative AI to produce case studies with known answers, instead of relying on the de-identified or test data that currently exists in limited supply through internal or external training sources.
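To make that idea concrete, here is a minimal Python sketch of one way an examiner might build a practice data set: a synthetic payment ledger with a handful of planted anomalies recorded in an “answer key” column. The file name, column names, vendors and anomaly types are all invented for illustration, and a chatbot-generated version of such a script would deserve the same scrutiny as any other AI output.

    import csv
    import random
    from datetime import date, timedelta

    random.seed(42)  # fixed seed so the practice set is reproducible

    VENDORS = ["Acme Supply", "Northwind Ltd.", "Globex Corp.", "Initech"]

    def make_ledger(n_rows=500, n_planted=5):
        """Build a synthetic payment ledger, then plant known anomalies."""
        start = date(2023, 1, 1)
        rows = [{
            "txn_id": f"TXN{i:05d}",
            "date": (start + timedelta(days=random.randrange(120))).isoformat(),
            "vendor": random.choice(VENDORS),
            "amount": round(random.uniform(50, 4900), 2),
            "planted_anomaly": "",  # the answer key; blank means a clean row
        } for i in range(n_rows)]

        # Plant anomalies an examiner should be able to surface with basic tests.
        for row in random.sample(rows, n_planted):
            kind = random.choice(["just_under_threshold", "round_dollar", "duplicate"])
            row["planted_anomaly"] = kind
            if kind == "just_under_threshold":
                row["amount"] = 4999.00  # just under a hypothetical $5,000 approval limit
            elif kind == "round_dollar":
                row["amount"] = float(random.choice([1000, 2000, 3000]))
            else:  # append an exact duplicate of an existing payment
                dup = dict(row)
                dup["txn_id"] += "-DUP"
                rows.append(dup)
        return rows

    with open("practice_ledger.csv", "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["txn_id", "date", "vendor", "amount", "planted_anomaly"])
        writer.writeheader()
        writer.writerows(make_ledger())

An examiner could then run their usual tests (duplicate searches, round-number and just-under-threshold filters, digit analysis) against the file and grade the results against the planted_anomaly column.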

Helpful uses

Fraud examiners might turn to chatbots and generative AI for help producing written reports or documentation. The number of instances of ChatGPT being used for school essays, clickbait articles and an entire swath of legitimate and illegitimate writing grows every day. Fraud examiners should, again, refrain from prompting an AI with sensitive information to create a report, and they should be wary of plagiarism or lazy copy-and-paste jobs. A fraud report is often the culmination of a laborious, intensive, complicated examination and, while writing everything down may be the least “fun” part of the job, it is also the memorialization of everything that was done. AI-generated writing still comes replete with errors and inhuman, out-of-place phrasing. The creators of ChatGPT have gone so far as to release a tool intended to distinguish human writing from AI-generated text.

Outside of the written word, fraud examiners can jumpstart their programming or data analysis capabilities. Excel formulas, for instance, can be incredibly useful time savers when dealing with large data sets or crunching numbers, but for fraud examiners who don’t use them routinely, there is often some trial and error in getting known formulas to spit out the necessary information. Chatbots can be prompted with plain-language directions and return complex formulas. For more tech-savvy fraud fighters, generative AI has proven incredibly useful for software development, once again taking basic commands and translating them into the requested programming language. Need to write a quick script or automate a frequently used command? Not even sure what that means? Generative AI can help explain, create and implement basic programs.
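As a hedged illustration of the kind of “quick script” a chatbot might hand back, the short Python example below reads a CSV export of payments and flags exact duplicates and round-dollar amounts. The file name and column names are assumptions for the example; an examiner would swap in their own.

    import csv
    from collections import Counter

    # Assumed input: a payments export with "vendor" and "amount" columns.
    with open("payments.csv", newline="") as f:
        payments = list(csv.DictReader(f))

    # Count (vendor, amount) pairs to surface possible duplicate payments.
    pair_counts = Counter((p["vendor"], p["amount"]) for p in payments)
    for (vendor, amount), count in pair_counts.items():
        if count > 1:
            print(f"Possible duplicate: {vendor} paid {amount} ({count} times)")

    # Flag suspiciously round amounts, a common quick-and-dirty fraud test.
    for p in payments:
        if float(p["amount"]) % 100 == 0:
            print(f"Round-dollar payment: {p['vendor']} {p['amount']}")

The Excel equivalent of the duplicate test, a COUNTIFS across the vendor and amount columns, is exactly the sort of formula these chatbots can produce from a plain-language request.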

What’s next?

While ChatGPT itself might prove a passing trend (a diversion to throw questions at when you’ve exhausted your social media updates, perhaps), generative AI is ramping up and pushing its way into everyday life. Fraud examiners are likely already seeing, or will shortly be made aware of, the use of AI chatbots in the commission of fraud. Fraud examiners can look to current chatbots to expand their skill sets, possibly save some time and, eventually, help combat fraud at every level. AI might not be there yet, but chatbots have already seen an explosion in use and capability in their short time on the stage. The ACFE will keep an eye on what comes next and work to keep fraud fighters informed.