Research Findings

Glean practical applications from these incisive papers

Getting ahead of fraudsters often requires staying current with the latest developments in criminal methods, advances in technology and academic research. This column offers a diverse sampling of four recent research studies that might benefit your daily efforts to curb fraud, improve detection and make the most of your limited time. This brief review extracts some of the most practical aspects of the recent academic literature, with a link to each paper for those who want more detail.

“Man vs. Machine: Investigating the adversarial system use on end-user behavior in automated deception detection interviews” (2016) by Jeffrey Proudfoot, Randall Boyle, and Ryan Schuetzler

In basic terms, this study examined the behavioral countermeasures people use when they’re forced to respond to automated detection tests. As technology advances and the means to evaluate deception through sensors and systems improve, these researchers evaluated how people with guilty knowledge adjust their responses to adversarial detection systems.

Consider the detection tests in our everyday lives that are compulsory, such as airport body-scanners, breathalyzer tests, eye exams at the department of motor vehicles and polygraph tests. These types of tests are characterized as adversarial in which people are “placed in situations in which they must interact with the system, have no control over the data that are collected, and could be subject to a punitive outcome.”

The research addressed how government and law enforcement continue to move toward automating deception detection systems that are unobtrusive, cost effective and accurate enough for scientific and legal applications.

What these researchers wanted to examine, in part, was how people are responding and adjusting to the growing number of adversarial systems in our lives. Specifically, the researchers examined the variations in behavior that occur when people interact with deception detection systems. The behavioral countermeasures were shown to vary significantly based on whether subjects: 1) had guilty knowledge, 2) received stimuli or questions that generated stress and 3) had a good understanding of the detection system’s capabilities.

One of the surprising results of the study was that a relatively high percentage (14.5 percent) of subjects who had no guilty knowledge and no reason to trick the detection system admitted they still instinctively used some behavioral countermeasures.

More in line with the expectations of the researchers, participants with guilty knowledge were 2.6 times more likely to use countermeasures than those without guilty knowledge. Participants exposed to relevant stimuli or questions were 2.9 times more likely to use countermeasures, and those with knowledge of the detection system capabilities were 3.3 times more likely to rely on behavioral countermeasures.
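
For readers curious how such “times more likely” figures are typically derived, the short sketch below computes an odds ratio from a 2x2 table. The counts are hypothetical, invented only to reproduce a ratio near 2.6; the paper’s actual counts and statistical models are in the original study.

```python
# Hypothetical 2x2 counts (NOT from the paper) showing how a
# "2.6 times more likely" figure can arise as an odds ratio.
#
#                        used countermeasures   did not
# guilty knowledge               55                 45
# no guilty knowledge            32                 68

guilty_used, guilty_not = 55, 45
innocent_used, innocent_not = 32, 68

odds_guilty = guilty_used / guilty_not        # odds of countermeasure use, guilty group
odds_innocent = innocent_used / innocent_not  # odds of countermeasure use, innocent group

odds_ratio = odds_guilty / odds_innocent
print(f"odds ratio: {odds_ratio:.1f}")  # ~2.6 with these illustrative counts
```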

Behavioral countermeasures are responses intended to circumvent a detection system and can represent a wide range of human activity. For the purposes of this study, manipulation was confined to types of eye behavior, speech patterns and emotion control when subjects were required to view and respond to adversarial detection efforts.

Among the visual countermeasures listed, subjects with guilty knowledge relied most (22 percent) on viewing the assorted images in a consistent pattern (left to right, up to down) and far less (5 percent) on giving equal time to each image. Subjects with no guilty knowledge, by contrast, most often (28 percent) reported allowing equal time for each image, and none reported trying to blur their vision, as some of the guilty-knowledge subjects did.

Vocal countermeasures also differed markedly between the groups. Unlike the other subjects, guilty-knowledge subjects attempted to match vocal response latency across responses (19 percent), control their emotional states (14 percent) and, in some cases, even inflicted pain on themselves (6 percent) to try to distort detection efforts. A full list of the countermeasures and tested results can be found in the research paper.

The more these automated adversarial systems are used for detection, the more we need to understand the most common behavioral responses that fraudsters might exhibit to try to circumvent detection efforts.

“Deciphering the riddle of human deceit detection: groups comprising a higher number of anxious people are better at distinguishing lies from truths” (2016) by Tsachi Ein-Dor, Adi Perry-Paldi, Tal Daniely, Karin Zohar-Cohen and Gilad Hirschberger

This fascinating study on the power of groups to detect lies provides a wealth of insight on related group studies, including some of the ways groups can significantly improve deceit detection.

The study reviewed research over the past decade that addresses the wisdom of relying on juries to assess credibility. Consistent with most of the prior research, this study also validated that small groups achieve “higher hit rates and lower false alarm rates than individuals in detecting interpersonal deceit.” So, when in doubt, try forming a small group to improve your detection efforts.

Beyond the well-documented strength in numbers for detecting deceit, the researchers discovered that group performance was highly impacted by the number of group members high in attachment-related anxiety.

The study found that people with high levels of this type of anxiety “possessed well-organized schemas which allowed their cognitive system to be constantly vigilant to possible dangers, to respond quickly to signs of threats, and to act effectively to diffuse a threat.”

These group members were especially attentive to subtle signs of dishonesty, a sensitivity that emerged from their anxiety related to separation and abandonment. They also showed the ability to distinguish lies from truths without an increased tendency toward false positives.

While high anxiety seems maladaptive at the individual level, the researchers found that groups with greater numbers of anxious members detected more lies, at statistically significant levels, than groups without highly anxious members.

More comparisons on lie detection rates between groups and individuals were tested throughout the study. The additional insights from this 11-page research study can be found online.

“Detecting Fraudulent Behavior on Crowdfunding Platforms: The Role of Linguistic and Content-Based Cues in Static and Dynamic Contexts” (2016) by Michael Siering, Jascha-Alexander Koch, and Amit V. Deokar

Researchers observed that crowdfunding platforms have surged in recent years as easy ways to collect seed capital for project ideas. They noted that Kickstarter, the leading platform, had attracted more than 250,000 projects with total funding of $1.9 billion through 2015. However, 65 percent of the projects fail to reach their target funding, and even those that become fully funded sometimes fail to deliver the promised project outcomes to the contributors.

Kickstarter and other similar platforms have set up integrity teams to identify fraud and remove projects that might pose a high risk of deception to potential contributors. The researchers used the Kickstarter data set of suspended projects to analyze characteristics that differed from the population of the non-suspended projects.

Among the baseline descriptive variables analyzed — such as project duration, pledging amount, offer of rewards, and reward limits — statistically significant differences emerged.

Suspended projects stayed online significantly longer (37.7 days on average) than non-suspended projects (29.7 days on average), and the suspended projects typically sought lower funding goals ($21,638 on average) than non-suspended projects ($31,795 on average).

The researchers also looked at characteristics of the founders of the suspended projects and found significant differences in the number of networked friends they had, as well as in the length of time they’d been active on the social networking website.

Suspended project founders had on average 243 networked friends compared to 355 networked friends for the population of non-suspended project founders. Additionally, suspended founders had a much shorter average time active on the social platform (138 days) compared to the population of non-suspended founders (261 days).

The researchers went well beyond baseline characteristics and also analyzed linguistic variables of affect, language complexity, diversity, word length, expressivity, self-references, verb quantity, specificity and errors, to name a few.

The language tests took into consideration both static material, such as online references and websites, and dynamic formats, such as discussion board threads, updates and FAQ responses. Running a series of content and format tests of the language relating to each project resulted in highly accurate fraud detection rates. The study built on previous linguistic tests that rely on static or dynamic communication alone.

By combining tests of content with both static and dynamic communication, the researchers’ best-performing classifier was able to detect fraudulent projects with an accuracy of nearly 80 percent.
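
To make the approach concrete, here’s a minimal sketch of how a linguistic-cue classifier of this kind could be assembled. The feature choices, example data and model below are assumptions for illustration only; the paper’s actual feature set and classifier are described in the original study.

```python
# Illustrative sketch of a linguistic-cue fraud classifier (hypothetical
# features and data; not the study's actual model).
import re
from sklearn.linear_model import LogisticRegression

def linguistic_features(static_text, dynamic_text):
    """Extract a few simple cues from static text (the project page) and
    dynamic text (updates, FAQ responses), mirroring the static/dynamic split."""
    feats = []
    for text in (static_text, dynamic_text):
        words = re.findall(r"[a-z']+", text.lower())
        n = max(len(words), 1)
        feats.append(len(words))                                      # verbosity
        feats.append(len(set(words)) / n)                             # lexical diversity
        feats.append(sum(len(w) for w in words) / n)                  # average word length
        feats.append(sum(w in ("i", "me", "my") for w in words) / n)  # self-references
    return feats

# Hypothetical labeled projects: 1 = suspended (suspected fraud), 0 = not suspended.
projects = [
    ("Fund our handmade watch line.", "Update: the molds arrived this week.", 0),
    ("Revolutionary device, guaranteed returns!", "", 1),
]
X = [linguistic_features(s, d) for s, d, _ in projects]
y = [label for _, _, label in projects]

model = LogisticRegression().fit(X, y)  # a real study would need far more data
```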

The research application extends beyond the Kickstarter platform and provides the means for accurate fraud detection across a wide range of social networking platforms.

“Does Training Improve the Detection of Deception? A Meta-Analysis” (2016) by Valerie Hauch, Siegfried L. Sporer, Stephen W. Michael, and Christian A. Meissner

This meta-analysis examined the results of 30 different studies, published between 1981 and 2011, on whether training improves the detection of deception. Conditioned on a number of different variables, the researchers found a small to medium improvement in detection accuracy and lie accuracy, but not in truth accuracy.

The training formats from all 30 studies were categorized into three types of training that focused on 1) nonverbal and paraverbal cues, 2) verbal content cues and 3) feedback. Training intensity was accounted for by measuring duration, presentation medium, number of practice examples, group size and trainer presence.

Some of the meaningful outcomes of the research identified useful attributes of training methodology. For example, duration of training was an important factor. Across the studies, the short training category (five to 20 minutes) had no significant impact on trainees’ detection results, while the medium (21 to 60 minutes) and long (61 to 180 minutes) training sessions both yielded significant improvements.

The presentation medium contributed significantly to training effectiveness, while varying the number of practice examples did not. For example, training programs that used written instructions, or combined written instructions with a lecture or video, had significantly larger training effects than those that relied on lectures or videos alone.

The impact and effects of many different training formats and methods were assessed throughout the paper. Ultimately, the training with the most promising improvements for detection accuracy across all the studies was found to be training on verbal content cues.

Trainees’ detection ability was found to improve most significantly by focusing on verbal content rather than on heuristic cues like nonverbal behavior. Combinations of training formats were also analyzed but improvement was only found to be significant when verbal content training was incorporated.

Additional details, and the limitations the researchers encountered while evaluating the 30 training studies, are described in the paper. This study provides an excellent reference for planning an effective training program, with a great deal of useful application for instructors. It might be especially helpful to consider these detection-training results, which combine more than 30 years of research into one paper.

Learning from the researchers

If you’re an academic, you’re probably fascinated with the results of these research papers. If you’re a practitioner, don’t miss out on the practical points that you can apply today. Researchers are our friends!

Jeff Henning, Ph.D., CFE, CPM, is director of Henning Financial Services. His email address is: jd@henningfs.com. 
