
The grand scheme of things
Read Time: 6 mins
Written By:
Felicia Riney, D.B.A.
Robots have been playing chess for decades. On May 11, 1997, IBM supercomputer Deep Blue made history as the first computer to beat a reigning world chess champion, Garry Kasparov, in a six-game match under the World Chess Federation’s standard time controls. (See Twenty years on from Deep Blue vs Kasparov: how a chess match started the big data revolution, by Mark Robert Anderson, May 11, 2017, The Conversation.)
Computers now are taking on other computers. On Dec. 7, 2017, Google’s artificial intelligence (AI) subsidiary DeepMind announced that AlphaZero, its game-playing AI, had taught itself how to play chess in less than four hours and beaten Stockfish 8, then the world-champion chess program, in a 100-game matchup. AlphaZero won or drew all 100 games. (See AlphaZero AI beats champion chess program after teaching itself in four hours, by Samuel Gibbs, Dec. 7, 2017, The Guardian.)
According to The Guardian article, DeepMind’s programmers gave AlphaZero no human input apart from the basic rules of chess, yet it achieved a superhuman level of play in chess and shogi (a similar Japanese board game) within 24 hours. That absence of human input is what separates AlphaZero from its competitors: it learns everything else by playing itself over and over, using “self-reinforced knowledge.”
Self-reinforced knowledge is coming in handy not just for chess-playing robots — the anti-fraud profession is using the same technology to help fraud examiners detect fraudulent behavior before it costs organizations millions in losses.
Since Dr. Joseph T. Wells, CFE, CPA, founded the ACFE in 1988, he’s concentrated not just on fighting fraud but providing strategies for preventing it.
“Rules-based analytics and many mainstream fraud examination tools are really good at identifying red flags and frauds that have occurred — the inside of the Fraud Triangle,” says Jeremy Clopton, CFE, ACDA, CIDA, CPA, owner of What’s Your SQ.
“However, if we really want to make a difference in anti-fraud — not just detecting, but mitigating the risk and trying to prevent fraud — we have to get outside the Fraud Triangle and use cutting-edge AI and machine learning to start proactively identifying pressures, opportunities and rationalizations plus anomalies and risk patterns,” Clopton says.
“For instance, machine learning is very good at detecting anomalies in how people are communicating — through emails, texts, etc. — to identify indications of involvement,” Clopton says. “It can detect changes in emotional tone and sentiment. For example, when communication goes from being very direct and forthright, to suddenly being very evasive. Or they’re suddenly nervous, they’re vague, the frequency of communication changes, who they’re communicating with changes, the concepts/topics they discuss start to evolve.
“We need to be able to identify the sides of the Fraud Triangle coming together before fraud occurs to prevent fraud,” he explains. “If we can’t identify that, we’re always going to be reacting to fraudulent activity.”
“Artificial intelligence is the study of agents that perceive the world around them, form plans and make decisions to achieve their goals,” according to Machine Learning for Humans, by Vishal Maini, Aug. 19, 2017, Medium. According to Maini, its foundations include mathematics, logic, philosophy, probability, linguistics, neuroscience and decision theory. AI includes robotics, machine learning and natural language processing.
Machine learning is a subfield of artificial intelligence. According to Maini in the Medium article, its goal is to enable computers to learn on their own. “A machine’s learning algorithm enables it to identify patterns in observed data, build models that explain the world, and predict things without having explicit pre-programmed rules and models,” Maini writes.
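Maini’s distinction between learned models and pre-programmed rules can be illustrated with a toy sketch. The data and the threshold rule below are invented for illustration: instead of hand-coding a rule such as “flag anything over $10,000,” the program estimates a threshold from the behavior it observes.

```python
import statistics

def learn_threshold(amounts, k=3.0):
    """Learn an anomaly threshold from observed transaction amounts
    instead of hard-coding a rule like 'flag anything over $10,000'."""
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    return mean + k * stdev

def flag_anomalies(amounts, threshold):
    """Flag amounts that exceed the learned threshold."""
    return [a for a in amounts if a > threshold]

# Typical payments cluster around $100; one $5,000 outlier stands out.
history = [95, 102, 88, 110, 97, 105, 92, 5000]
threshold = learn_threshold(history[:-1])  # learn from normal behavior
print(flag_anomalies(history, threshold))  # [5000]
```

The point is the division of labor: the programmer supplies the learning procedure, and the specific decision boundary comes from the data.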
James Ruotolo, CFE, a director and analytics software leader at SAS, says companies have been using similar technologies for a long time — they just haven’t been calling them machine learning or AI.
“In the last 15 years or so, we’ve taken those tools we used for business analytics and started to apply them to very specific business problems,” Ruotolo says.
“Historically we’d be providing you with a generic set of analytical tools and you could do any kind of analysis you want. We still allow you to do that, but more recently we’re also building these tools specific to detecting fraud, identifying money laundering activity and to risk management,” Ruotolo explains.
According to Ruotolo, to grasp how AI and machine learning work, we first have to understand data in its multiple forms.
To further comprehend how contextual analysis works, Ruotolo provides an insurance-claim example. An AI model looks at the notes in a claim file for an automobile accident to predict severity. How much will it cost the insurance company? How serious will the injuries be? This is all based on text and keyword analysis. If the system does a keyword search only for “ambulance,” that’s insufficient — the text might say “ambulance was called to the scene” or it might say “no ambulance was necessary.” Both instances contain the word ambulance. “What the software has to be able to do is understand the context and usage of those key words within the sentence structure,” Ruotolo explains. “That’s sentiment analysis or contextual analysis.”
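Ruotolo’s ambulance example can be sketched in a few lines. This is not how any vendor’s software actually works; it’s a deliberately crude, hypothetical stand-in that checks the words preceding the keyword for negations, which is just enough to separate the two sentences he cites.

```python
import re

NEGATIONS = {"no", "not", "without", "denied"}

def mentions_ambulance_in_context(note: str) -> bool:
    """Return True only when 'ambulance' appears without a nearby negation.

    A plain keyword search matches both 'ambulance was called to the
    scene' and 'no ambulance was necessary'; checking the words just
    before the keyword is a crude stand-in for real contextual analysis."""
    words = re.findall(r"[a-z']+", note.lower())
    for i, word in enumerate(words):
        if word == "ambulance":
            window = words[max(0, i - 3):i]  # look at the preceding words
            if not NEGATIONS & set(window):
                return True
    return False

print(mentions_ambulance_in_context("Ambulance was called to the scene"))  # True
print(mentions_ambulance_in_context("No ambulance was necessary"))         # False
```

Production systems replace this word-window trick with models that weigh full sentence structure, but the goal is the same: usage, not mere presence, of the keyword.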
Heidi Stenberg, principal of the Fraud Investigation and Dispute Services practice at EY, explains that AI and machine learning can also study all aspects of a conversation — the words and context — to identify patterns. “In investigations, we use AI and other analytics technologies to detect inherent meanings within data and reveal hidden relationships and patterns of behavior,” she explains. “Machine learning is regularly applied to fine tune the analytics models to reduce false positives over time.”
Ruotolo says two important factors are necessary for machine learning and AI to work well: 1) automating that capability and 2) making it adaptive or essentially self-learning. “You’re not only automating the process — that it’s happening constantly or in real time — but it’s also taking information as it changes over the course of that time and then modifying its capabilities. It’s learning as it finds new information,” says Ruotolo.
If an organization has an AI system that’s producing alerts or red flags of potential fraud, investigators will look at those alerts and decide, yes, this is a case we should investigate or, no, this is a false positive. Over time that information feeds back into the model, and the model self-adjusts. It then decides that every time it produces an alert for this reason, it’s a false positive. So, it will delete that variable because it’s not proving to be useful. The model’s constantly reevaluating its own performance based on that feedback and will adjust itself.
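The feedback loop described above (alerts go out, investigators label them, the model adjusts) might look like this in miniature. The class name, rule names and thresholds are all hypothetical; real systems reweight model variables rather than switching whole rules on and off, but the retire-what-keeps-failing logic is the same.

```python
from collections import defaultdict

class AdaptiveAlertModel:
    """Toy version of a self-adjusting alert model: investigators label
    each alert, the model tracks per-rule precision, and rules that
    produce only false positives are retired."""

    def __init__(self, min_alerts=5, min_precision=0.2):
        self.stats = defaultdict(lambda: {"hits": 0, "total": 0})
        self.min_alerts = min_alerts        # feedback needed before judging a rule
        self.min_precision = min_precision  # below this, the rule isn't useful

    def record_feedback(self, rule, confirmed_fraud):
        """An investigator marks an alert as real fraud or a false positive."""
        s = self.stats[rule]
        s["total"] += 1
        if confirmed_fraud:
            s["hits"] += 1

    def active_rules(self, rules):
        """Keep each rule until enough feedback shows it isn't useful."""
        keep = []
        for rule in rules:
            s = self.stats[rule]
            if s["total"] >= self.min_alerts and s["hits"] / s["total"] < self.min_precision:
                continue  # retire: alerts from this rule are false positives
            keep.append(rule)
        return keep

model = AdaptiveAlertModel()
for _ in range(6):  # six alerts, all judged false positives
    model.record_feedback("round_dollar_amount", confirmed_fraud=False)
model.record_feedback("duplicate_invoice", confirmed_fraud=True)
print(model.active_rules(["round_dollar_amount", "duplicate_invoice"]))
# ['duplicate_invoice']
```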
“That’s really what makes AI the more powerful capability because you don’t have to go back in and manually refresh it or change the model as we have typically done in the past,” says Ruotolo.
Viktor Mirovic, CFO and co-founder of KeenCorp, says its software solutions are using these powerful capabilities to identify tension in employee communications to measure engagement. He explains that the technology captures a different kind of signal. “When tension rises, we usually reflect back to emotions,” he explains. “Once we get into that emotional state, we respond differently. When you and I start to work together, it might take a couple days for you to understand my rhythm. You’ll then start to understand how to calibrate your connection with me to make things happen. In a larger group, when that rhythm is broken because of stress, management should be alerted because something is going on.”
To do this, KeenCorp’s software gathers company communications, groups them (by department, region, etc.) and anonymizes or cleans the data of personal employee information, including names, for privacy. It then calibrates, or creates a baseline of, a standard tone of communication within each group. When tension rises within a group, the software raises a red flag.
“We establish a baseline for the typically culturally sensitive areas of the company, we calibrate those areas and when the baseline pattern is broken, our signal will wake up,” he says. “That’s the first sign, the check-engine light.” The software doesn’t reveal exactly what’s going on or expose individual signals, and it won’t store individual messages. “It’s a way to get an early warning signal to stakeholders that you should start a dialogue. You go from reactive to proactive,” says Mirovic.
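The calibrate-then-alert idea Mirovic describes can be sketched as a simple statistical baseline. KeenCorp’s actual method is proprietary, so the per-group tension scores and the deviation test below are purely illustrative.

```python
import statistics

def calibrate(baseline_scores):
    """Establish a group's baseline tension: its mean and spread."""
    return statistics.mean(baseline_scores), statistics.pstdev(baseline_scores)

def check_engine_light(mean, stdev, new_score, z=2.0):
    """Raise a flag when a group's tension score breaks its own baseline."""
    return abs(new_score - mean) > z * stdev

# Hypothetical tension scores derived from one group's anonymized messages.
finance_baseline = [0.30, 0.28, 0.33, 0.31, 0.29, 0.32]
mean, stdev = calibrate(finance_baseline)
print(check_engine_light(mean, stdev, 0.31))  # False: within the group's rhythm
print(check_engine_light(mean, stdev, 0.55))  # True: tension spike
```

Note that the alert is relative to each group’s own baseline, not an absolute threshold, which matches the “rhythm is broken” framing above.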
Mirovic and his team beta-tested their software in 2013 using public Enron emails. They brought in Andrew Fastow, former Enron CFO and convicted fraudster, to show him where their software detected times of tension. “He gave us the backend to the story, which gave us so much insight,” Mirovic explains. When they pointed out one major spike in their data, Fastow recognized that the spike matched the day he set up LJM, a company he created in 1999 to buy Enron’s poorly performing assets and bolster Enron’s financial statements. “Andy knew right off the bat that was it. He could correlate it for us.
“We are just scratching a new surface in the data transformation world,” says Mirovic. “There are still a lot of developments that we can learn from other disciplines where technology is already mainstream.”
Stenberg says that this digital transformation is affecting all aspects of business. “New ways of working within organizations are driving and impacting both e-discovery and the anti-fraud profession because we have new places and new sources of evidence — Microsoft’s Yammer, social media, collaborative tools such as Slack — to look at and find potentially relevant information,” she says. “To bring disparate data sources together and draw insights from the combined data, firms like us are compelled to seek out advanced technologies. However, technology alone isn’t enough. We also need to build a new talent mix that possesses both technology know-how and business acumen in order to effectively manage fraud risk in today’s digital world.”
Anti-fraud professionals also are now looking for information in new places (the cloud, mobile devices, etc.). Because data volumes have become so large, Stenberg says AI and machine learning will make fraud examinations more efficient and less expensive. “The number of transactions generated far and away surpasses many of the traditional tools that are available to investigators to use to analyze information,” she says. “They really aren’t sufficient anymore.”
“The data is now coming from different and disparate sources that you have to triangulate together to review, analyze and derive meaningful insights,” Stenberg says.
Stenberg says education is really at the heart of understanding the data landscape. Fraud examiners should be mindful of the rapid changes in the types of data they need to understand and how their organizations are working with these data sources. Old, traditional methods simply have to evolve. “We call it data blending,” she says. “Looking at multiple data sources and incorporating them into an analytics platform for review to gain the most insights.”
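Data blending, as Stenberg describes it, amounts to joining disparate sources on a common key so one platform can review them together. Here’s a minimal sketch with invented HR, expense and badge-log records; real blends involve messier keys and far more sources.

```python
# Hypothetical records from three disparate sources, keyed by employee ID.
hr = {"E100": {"name": "A. Smith", "dept": "Finance"}}
expenses = [{"emp": "E100", "amount": 4200.0},
            {"emp": "E100", "amount": 150.0}]
badge_logs = [{"emp": "E100", "after_hours_entries": 7}]

def blend(hr, expenses, badge_logs):
    """Combine HR, expense and badge data into one record per employee
    so an analytics platform can review all of it together."""
    blended = {}
    for emp, info in hr.items():
        blended[emp] = dict(info)
        blended[emp]["total_expenses"] = sum(
            e["amount"] for e in expenses if e["emp"] == emp)
        blended[emp]["after_hours_entries"] = sum(
            b["after_hours_entries"] for b in badge_logs if b["emp"] == emp)
    return blended

print(blend(hr, expenses, badge_logs))
# {'E100': {'name': 'A. Smith', 'dept': 'Finance',
#           'total_expenses': 4350.0, 'after_hours_entries': 7}}
```

A pattern like high expenses plus unusual after-hours access only becomes visible once the sources sit in one record.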
Fraud examiners also must become more proficient in understanding available visualization techniques — the analytics tests and the algorithms. Many of the algorithms work toward having some human intervention to test the models. According to Stenberg, fraud examiners will go through a lot of trial and error as they’re working through the tools.
“Most of the new visualization technologies support human interaction,” says Stenberg. “Visual dashboards usually come with capabilities for users to provide input on analytics results. Machine-learning technologies will always require some level of human intervention to help the algorithms improve. But the dependency on human input will ultimately become minimal as false positives decline over time.”
According to Clopton, these tools need to be developed to the point where non-programmers find them as easy to use as point-and-click software.
“We’re not going to have fraud examiners who are also programmers and software developers in every organization in the world,” Clopton explains. “These technologies need to be user-friendly and easy to use — like the Excels of the world. Fraud examiners need to be able to comprehend it and have confidence in the model.”
According to Clopton, successful implementation will take a few things.
“And I think we’re working in that direction,” he says.
Red flags are only as good as the people who look for them and decide it’s time to investigate. While AI and machine-learning technologies are certainly beginning to make an impact in the anti-fraud profession, it’s clear that fraud examiners shouldn’t set up the software and let it run on its own.
“Companies are dealing with red flags already. That’s not unusual,” says Clopton. How do they handle them? Who do they send in? “Just because a red flag is generated by a machine-learning algorithm versus standard analytics or an interview doesn’t change how you respond to it. It’s just coming from a different system,” he says. The technology used to identify the red flags doesn’t change the response. You’re still relying on the fraud examiner to respond.
On the preventive side, in theory, you’re catching things earlier, according to Clopton. You might find smaller red flags. And you have to make sure you don’t let materiality blind you to the bigger risk. “If we expand that detection perimeter and we’re identifying fraud before it has a significant impact on the organization, by definition we should be catching it before it’s material,” he explains. “If we’re catching it before it’s material we have to make sure that whoever goes in doesn’t look at it and say it’s not high risk.”
In the movie “2001: A Space Odyssey,” HAL 9000 (Heuristically programmed ALgorithmic computer), the fictitious sentient computer that controls the Discovery One spacecraft on its mission to Jupiter, malfunctions because it analyzes faulty data. HAL proceeds to kill all the astronauts except Dave Bowman, who shuts HAL down. Modern robots can help us detect fraud, but they still rely on accurate data. If we can supply them with the right inputs, revelatory robots can be some of our greatest assets.
Emily Primeaux, CFE, is associate editor of Fraud Magazine. Contact her at eprimeaux@ACFE.com.