
During their careers, fraud examiners will inevitably run into major problems on a high-profile project such as a forensic investigation. Human interaction can be a messy affair and often a major hindrance in project management. Here are tips to minimize cognitive bias and other psychological pitfalls that sabotage the best-laid plans.
If you’ve ever attended project-management training, you learned about the factors that help control the various project phases: planning/goals, project scope, timeline, budget, work breakdown structure (WBS), quality, communications and the project team. Following these practices helps place guardrails on your project, barring of course a highly improbable and consequential occurrence, commonly known as a “black swan event.”
I once believed that a project’s weakest link was “scope creep,” the uncontrolled growth of a project’s scope that demands more labor, budget and time. I quickly learned, however, that this had an easy solution. Just ask the project stakeholder, “Which of these other priorities should I cut back on to make room for this additional unplanned effort?” That usually got them to reconsider their request.
When I was leading projects at Motorola, my team and I attended two weeklong programs, “Managing Projects in Large Organizations” and “Project Risk Assessment,” administered by George Washington University. On day one, an instructor made this memorable statement: “Too few people on a project can’t solve the problems; too many create more problems than they solve.”
And so, we were cautioned.
That’s when I learned that the weakest links in the project management chain are people. Even when you budget for extra time, a project is likely to take longer than you expected either because of a post-facto decision — or a failure to make one. No matter how focused and committed the team may be, projects often stumble due to bad decisions. And the more people involved, the more likely the project will suffer from “variance” — project-management parlance for the difference between what was planned and what actually happens. (See “Variance,” Project Management Knowledge.)
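Variance, in this plain sense, is simple arithmetic: actual minus planned. A minimal Python sketch makes the bookkeeping concrete; the task, hours and budget figures here are invented for illustration.

```python
# Hypothetical plan vs. actuals for a forensic-review task; all figures invented.
planned = {"hours": 120, "budget": 18_000}   # what the work breakdown structure called for
actual  = {"hours": 150, "budget": 22_500}   # what really happened

# Variance is simply actual minus planned; positive values mean an overrun.
variance = {k: actual[k] - planned[k] for k in planned}
print(variance)   # {'hours': 30, 'budget': 4500}
```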
“People are indeed the weak link in managing projects,” Jeremy Clopton, CFE, director of Upstream Academy, tells Fraud Magazine.
“Individuals’ unconscious biases, previous experiences, and personal beliefs will influence how they approach each forensic project, even if that isn’t the intent. This weakness is likely most prevalent in determining where to look for fraud, evaluating the types of fraud that may or may not be possible, and the plausibility of other explanations.”
We can’t help it if our first inclination is to run to the “been there, done that” answer or solution. The brain selects that which is familiar and quickly accessed from memory. It’s called cognitive bias.
Individuals on a project team may favor different approaches to even the smallest task on the WBS. And while rules to restrict subjective judgment and variability can help keep inconsistencies at a minimum, the decision-making process remains vulnerable to all types of psychological pitfalls. These include a tendency for our brains to jump to System 1 “rapid cognition” thinking. That, in turn, can lead to systemic, predictable errors of judgment if the more rational, logical (and slower) System 2 thinking isn’t invoked as a check on System 1 impressions. (See “Thin-Slicing Experience,” by Donn LeVie Jr., CFE, Fraud Magazine, Inside the interview, September/October 2021.)
There is also a type of cognitive bias known as the Dunning-Kruger effect (see sidebar at the end of this article). It appears when people with limited knowledge or competence in a field (fraud investigation, for example) greatly overestimate their own abilities in that specialty when compared to objective standards or the performance of their peers. (See “Dunning-Kruger effect” in Britannica online.)
Project managers would do well to heed Charles Darwin’s well-known observation: “Ignorance more frequently begets confidence than does knowledge.”
Ryan C. Hubbs, CFE, global anticorruption and fraud manager for Schlumberger and former ACFE Board of Regents member, describes an interesting variation on the Dunning-Kruger effect. “If we were to really dissect big project failures, we would probably find several instances where managing by position or seniority was a contributing factor in the failures, and managing by experience and expertise had less instances of failure,” he says.
“A title alone does not imbue the holder with instant knowledge. Yet some individuals may exhibit ‘position bias’ or evoke the ‘I’m the boss’ attitude in making critical decisions.”
If the Dunning-Kruger effect weren’t enough to worry about, we humans are also unreliable decision-makers simply because a whole host of external factors can influence our moods while we work on a project.
Concern about the weather, worrying about the expense of a new roof, or anxiety over that upcoming root canal can change our disposition. When that happens, the brain unleashes serotonin, dopamine, glutamate and noradrenaline, all of which have an effect on our judgment — as do adrenaline, cortisol and melatonin. (See “How brain chemicals influence mood and health,” UPMC, Sept. 4, 2016.) It’s called cognitive noise.
Nobel Prize-winning psychologist Daniel Kahneman, co-author of “Noise: The Flaw in Human Judgment,” has highlighted how psychological or cognitive noise (as separate from bias) impacts our judgment. Kahneman and his co-authors, Cass Sunstein and Olivier Sibony, cite research showing that judges pronounced stiffer sentences for juvenile offenders on Monday morning if the local football team lost a game over the weekend. (See “Judges give harsher penalties when their favorite football team loses unexpectedly,” by Mihai Andrei, ZME Science, July 6, 2018.) The book also illustrates how judges are more likely to hand out harsher punishments if they’re hungry. When cognitive bias is ruled out, such correlations (not necessarily causation) are likely attributable to one thing: noise. (See “Dissecting ‘Noise,’” by Vasant Dhar, Los Angeles Review of Books, Aug. 9, 2021.)
System noise consists of two components: level noise and pattern noise. Nearly every decision-making process involves level noise, the variability in the average level of judgment from one person to the next in the same system. Employee performance evaluations are a good example. One manager may be magnanimous in evaluating subordinates, while another, using identical evaluation criteria, may be far harsher. We see the same ambiguity with “on a scale of 1 to 10” ratings: one manager’s “7” is another manager’s “5” for the same performance. (See “Daniel Kahneman Says Noise Is Wrecking Your Judgment. Here’s Why, and What to Do About It,” by Beverly Goodman, Barron’s, May 28, 2021, and “How noisy is your company?” by Theodore Kinni, strategy + business, Business Books, May 19, 2021.)
Pattern noise is harder to predict and is a significant source of inconsistent decision-making. It results from how people see the world differently from one another. We believe what we believe for various rational and irrational reasons justifiable only to ourselves. As Sibony describes it, pattern noise comes from our idiosyncrasies. It’s why a tough judge might be more lenient with white-collar criminals. (See “Beyond Bias with Olivier Sibony,” The Decision Lab, May 24, 2021.)
Take the recent Winter Olympics, for example. Any event where judges are involved introduces level noise, pattern noise and occasion noise. Some judges score consistently higher or lower than others while applying the same objective ranking criteria; that’s level noise. When individual judges differ over whether a particular athlete should advance to the medal round or go home, they exhibit pattern noise. And when the same judge would score the same performance differently depending on mood or circumstance (say, after winning the lottery), that’s occasion noise.
That’s why scores in individual and pairs figure skating vary from judge to judge. Judges can check all the objective criteria boxes on the score cards, but having to assign a numerical score is where noise comes into play, especially when judges harbor strong nationalistic tendencies.
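To make these distinctions concrete, here is a minimal Python sketch of a noise audit on a small panel of judges, along the lines of the variance decomposition the “Noise” authors describe. It separates level noise (differences in each judge’s average severity) from pattern noise (judge-by-case idiosyncrasy); all judges and scores are invented for illustration.

```python
import numpy as np

# Hypothetical ratings: four judges (rows) score the same six performances
# (columns) on a 1-10 scale. Names and numbers are illustrative only.
ratings = np.array([
    [7, 8, 6, 9, 7, 8],   # Judge A: a generous rater
    [5, 6, 4, 7, 5, 6],   # Judge B: a harsh rater (level noise vs. A)
    [6, 9, 3, 8, 7, 5],   # Judge C: idiosyncratic case by case (pattern noise)
    [7, 7, 6, 8, 6, 7],   # Judge D
])

grand_mean = ratings.mean()
judge_means = ratings.mean(axis=1)   # each judge's average severity
case_means = ratings.mean(axis=0)    # each performance's average score

# Level noise: variability of the judges' average levels.
level_noise = judge_means.std()

# Pattern noise: what remains after removing each judge's level and each
# performance's average difficulty, i.e., judge-by-case idiosyncrasy.
residual = ratings - judge_means[:, None] - case_means[None, :] + grand_mean
pattern_noise = residual.std()

print(f"Level noise:   {level_noise:.2f}")
print(f"Pattern noise: {pattern_noise:.2f}")
```

Measuring occasion noise would require the same judges to score the same performances on different occasions, which is part of why it’s the hardest component to pin down.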
In “Noise,” the authors describe a set of practices, which they call “decision hygiene,” for reducing noise in decision-making:
- Treat accuracy, not individual expression, as the goal of judgment.
- Think statistically and take the “outside view” of each case.
- Structure complex judgments into several independent subtasks.
- Resist premature intuitions until the evidence has been weighed.
- Obtain independent judgments from multiple judges, then aggregate them.
- Favor relative judgments and relative scales over absolute ones.
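The aggregation practice in particular has a simple statistical basis: averaging n independent, equally noisy judgments shrinks their spread by roughly the square root of n. A minimal Python simulation, with all figures invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: the "true" damages figure in a case is 100 (arbitrary
# units), and each examiner's independent estimate is noisy around it.
true_value, judgment_noise, trials = 100.0, 20.0, 10_000

for panel_size in (1, 4, 16):
    # Each trial: average the independent estimates of `panel_size` examiners.
    estimates = rng.normal(true_value, judgment_noise, size=(trials, panel_size))
    panel_means = estimates.mean(axis=1)
    # The spread of the averaged judgment shrinks roughly by sqrt(panel_size).
    print(f"{panel_size:2d} examiner(s): std of judgment = {panel_means.std():.1f}")
```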
Years ago, I facilitated meetings at Intel Corporation to address defects and bugs during the microprocessor design process. At these meetings, which were formerly run by engineering project leads, engineers would get bogged down in discussions about each defect. (Imagine 10 people on a team with 10 different opinions on whether an “issue” was a true defect.)
By the end of a two-hour meeting, they’d only discussed a handful of the 60 to 100 logged defects. The engineers were trying to solve individual defect problems too early in the process, and the project leads were only supposed to lead the meetings, not be involved in the discussions.
Engineering management saw that there was an issue with how the meetings were run, so I offered to run them instead. I allocated a time limit for each defect discussion: if the issue couldn’t be resolved by consensus within two minutes, I would appoint two engineers to examine the defect outside the meeting and report back at the next scheduled one. That way, we were able to get through 100 or more defects in a two-hour meeting by preventing participants from jumping to conclusions and trying to find remedies on the fly. The former approach was too noisy and costly; the latter approach was less noisy and less costly.
Can a rules-based approach to decision-making using bias-free and noise-free algorithms do a better job than humans? According to Kahneman, decidedly so, and the short answer is that mechanical approaches are free of noise. (See “Should Humans Be More Like Machines?” by Arnold Kling, Econlib, Book Review, Aug. 2, 2021.) That said, the best managers remain indispensable. They can look beyond the noise and bring together the expertise of their team to carry any project to a successful conclusion.
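The noise-free claim is easy to demonstrate in miniature: a rules-based score is deterministic, so identical inputs always produce identical outputs, with no level, pattern or occasion noise. A minimal sketch; the fields and weights below are invented for illustration, and a real scoring model would of course be validated against data.

```python
# A deterministic, rules-based risk score: identical inputs always yield
# identical outputs. Fields and weights are hypothetical.
def risk_score(case: dict) -> float:
    return (2.0 * case["red_flags"]
            + 1.5 * case["override_count"]
            + 0.5 * case["years_unaudited"])

case = {"red_flags": 3, "override_count": 2, "years_unaudited": 4}
assert risk_score(case) == risk_score(case)   # same case, same score, every time
print(risk_score(case))                       # 11.0
```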
“Without individuals’ expertise, judgment, and intuition — and being properly aware of the influence of cognitive bias and noise — we would not move from evaluation to conclusion in a project,” says Clopton. “The most successful examiners can balance the strengths and weaknesses of their people to lead a forensic project.”
And finally, one last pearl of wisdom from the George Washington University project management instructor on working with a team of people with different psychological propensities. “Change is inevitable — except from vending machines.”
Donn LeVie Jr., CFE, is a Fraud Magazine staff writer, and a presenter and leadership positioning/influence strategist at ACFE Global Fraud Conferences since 2010. He’s president of Donn LeVie Jr. STRATEGIES, LLC, where he leads programs and speaks on executive influence techniques and situational-influence strategies. His website is donnleviejrstrategies.com. Contact him at donn@donnleviejrstrategies.com.