
Deepfakes are no longer just amusing YouTube videos. Fraudsters are beginning to use them for business email compromises. Cyber experts say phone scams, sham celebrity endorsements, biometric fakes and bogus evidence could quickly follow. Now’s the time to advise your organizations.
The last-minute swing of an election, widespread civil unrest, intercontinental nuclear weapon launches — these are among the most sensational potential outcomes of “deepfakes.” Computer techs and cybercriminals are using artificial intelligence to manipulate video and audio clips to fabricate reality and prompt visceral responses from target audiences. And now slithery types will increasingly use them to commit fraud.
The first iteration of this technology debuted in a paper published by University of Washington researchers in the summer of 2017. They used machine learning — a subset of artificial intelligence — to combine pre-existing audio and video clips of former President Barack Obama to create a realistic, lip-synced video. (See Lip-syncing Obama: New tools turn audio clips into realistic video, by Jennifer Langston, UW News, July 11, 2017.)
The most popular type of technology used to make sophisticated deepfakes is Generative Adversarial Networks (GANs), in which one artificial intelligence program (the generator) uses machine learning to create manipulated media and another (the discriminator) evaluates that media for authenticity. The two go back and forth until the discriminator evaluates the media created by the generator as authentic. (See A Beginner’s Guide to Generative Adversarial Networks (GANs), by Chris Nicholson, Pathmind.)
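For readers curious about the mechanics, the sketch below shows that generator-versus-discriminator loop in miniature Python (using the PyTorch library). It is an illustration only: the tiny network sizes, the random stand-in data and the training settings are assumptions for demonstration, not how production deepfake tools are built.

# Minimal GAN training loop: the generator learns to produce samples the
# discriminator can't tell apart from "real" ones. Toy dimensions and random
# data stand in for the face imagery a real deepfake model would train on.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64
generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, data_dim)               # stand-in for genuine media samples
    fake = generator(torch.randn(32, latent_dim))  # the generator's attempt at "authentic" media

    # The discriminator learns to score real samples high and generated samples low.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # The generator learns to produce samples the discriminator scores as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()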
The deepfake moniker applied to this phenomenon emerged from origins no less disturbing than the dystopian scenarios the technology evokes. In November 2017, a user on the website Reddit began posting altered pornographic video clips featuring the faces of female celebrities superimposed over adult film actors from an account with the username “deepfakes.” The rest is history … and pseudohistory.
Reddit and other internet platforms banned those clips within months but not before manipulated videos initiated some uneasy conversations about implications of the technology used to create them.
Ryan Duquette, CFE, partner at RSM Canada, says deepfakes remind him of Orson Welles’ 1938 radio production of H.G. Wells’ “War of the Worlds.”
“The radio program caused people to run into the streets panicking,” says Duquette in an interview with Fraud Magazine. “It’s the same kind of thing. It’s manipulating people, but obviously video is much more realistic and convincing.”
Since the Obama lip-syncs, thousands of deepfake video clips have been created and shared online. According to Deeptrace, a company founded in 2018 to provide deepfake detection and monitoring solutions, more than 14,600 deepfake videos existed online as of August 2019, which represents an increase of almost 100 percent over the course of about eight months. (See the report, The State of Deepfakes: Landscape, Threats and Impact, Deeptrace, September 2019.)
Deeptrace’s report also indicates that deepfake pornography accounts for a significant majority (96%) of deepfake videos posted online. A quick search on YouTube using the term “deepfakes” produced mostly entertainment-related results. The top 10 most-viewed videos included three videos based on comedian and actor Bill Hader impersonating other celebrities, one video of actor Keanu Reeves stopping a robbery, one of Obama, two news show clips discussing the phenomenon, and a clip of Terminator 2 if it starred actor Sylvester Stallone instead of Arnold Schwarzenegger. A single user account, “Ctrl Shift Face,” appeared responsible for making, or at least uploading, four of those videos.
Other prominent examples of manipulated media that influenced viewers and drew vocal criticism include two videos that don’t necessarily qualify as deepfakes based on the techniques used by their creators. One video clip appeared to have been slowed down so U.S. House of Representatives Speaker Nancy Pelosi appeared to slur her speech while the other was sped up to show Jim Acosta, CNN’s chief White House correspondent, seemingly attack a White House aide during a press conference. Because these videos didn’t feature any superimposed faces or fabricated audio, or rely on advanced technological tools such as artificial intelligence, they were only “cheap fakes” or “shallow fakes.” Nevertheless, the clips led to questions about Pelosi’s fitness for office and the temporary revocation of Acosta’s White House press pass.
Politicians and celebrities make effective subjects of deepfake videos because large quantities of audio and video recordings of them are available to train the GANs that create quality synthetic media. It’s no surprise that some of the most-viewed deepfakes thus far feature global political figures like Obama or Russian President Vladimir Putin. We now realize the possible serious consequences of deepfakes.
“It could change our government or any government, or start a war. I mean, how much more serious do you get than that?” asks ACFE Faculty member Walt Manning, CFE, founder and president of the Techno-Crime Institute, in an interview with Fraud Magazine.
However, deepfakes involving public figures aren’t limited to video clips. A synthetic audio clip shared in May 2019 featured comedian, sports commentator and popular podcaster Joe Rogan discussing his sponsorship of an all-chimpanzee hockey team on a strict diet of bone broth and elk meat. The clip’s creators said they generated it with RealTalk, a text-to-speech synthesis system built by engineers at machine-learning company Dessa. Luckily, Dessa won’t be open-sourcing (making publicly available) the work. (See RealTalk: This Speech Synthesis Model Our Engineers Built Recreates a Human Voice Perfectly, by Dessa, Medium, May 15, 2019.)
The current array of deepfake tools spans a wide range, from commercially available face-swapping applications popular on social media to cutting-edge, experimental artificial intelligence developments spearheaded by researchers at prestigious institutions. Like Dessa’s RealTalk, most of the advanced technologies developed by researchers aren’t available to the general public.
James Ruotolo, CFE, senior director of product management and marketing for analytics software firm SAS’s Fraud and Security Intelligence Division, discussed the advancement of the technology behind deepfakes in an interview with Fraud Magazine. “The uptake is going to be very, very fast,” Ruotolo says. “And that creates significant challenges for the legal and investigative communities to keep up and for guidelines about how to deal with that situation — let alone the technology advancements that need to occur in order to address it.”
The existential threats posed by deepfakes became real and grounded in fraud in late August 2019, when a fraud expert at the Euler Hermes Group insurance company disclosed to The Wall Street Journal that a client fell victim in March 2019 to a business email compromise scam incorporating synthetic audio that impersonated the voice of a German CEO. According to the expert’s account of the incident, the faked audio included an urgent request that the victim transfer about $243,000 to a Hungarian supplier within an hour. The insurance company was the ultimate loser. (See Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case, by Catherine Stupp, The Wall Street Journal, Aug. 30, 2019.)
“Business email compromise is already a rapidly growing threat vector,” Ruotolo says. “Everyone I talk to at conferences and events is very concerned. This type of deepfake technology significantly exacerbates that problem.”
Reports of another, much larger business email compromise attack in September 2019 suggested that the fraudsters might have used deepfake technology to spoof the voice of an executive at financial media company Nikkei, which caused a loss of $29 million. However, Nikkei’s statement about the incident didn’t confirm the use of deepfake technology. Regardless, this could portend future frauds. (See GIACT On Payments’ $29M Wire Fraud Wake-Up Call, PYMNTS, Nov. 4, 2019.)
Indeed, a fraud scheme successfully enabled by deepfake media opens the door to any number of enhanced traditional fraud schemes.
“There’s a lot that it’s going to be used for,” says Duquette. “It’s probably already being used for other fraud schemes; we just haven’t heard of many cases yet where the people who were investigating it were able to determine that it was fake audio or video being used.”
The prospect of fraudsters, who don’t necessarily possess advanced technical skills, getting their hands on deepfake technology is practically a foregone conclusion, according to Ruotolo. “You can expect, much like we’ve seen in the realm of hacking, there will be toolkits that are created that are very easy, push-button approaches to create these types of things,” he says.
In fact, Deeptrace’s research found numerous downloadable tools or graphical user interfaces for creating deepfakes, and one application for synthetic voice cloning. Although many of them required sophisticated programming knowledge and advanced computer equipment, Deeptrace also found service portals and marketplace services advertising custom deepfake video and audio generation starting at $3.
As tools and services for creating deepfakes become more accessible and affordable, it stands to reason they’ll be applied to other fraud vectors that involve video or audio, including over-the-phone scams already bolstered by phone number spoofing.
“It’s not hard at all to spoof specific phone numbers. Online services will do that,” Manning says. “But I think robocalls soon are going to use deepfake technology to cause even more confusion. Instead of receiving robocall recordings that are obvious fakes, we’re now going to get calls with voices that are more natural and appear to be those we know so we’ll stay on the line.”
Duquette agrees on the implications for already-ubiquitous scam calls becoming enhanced by deepfake technology. “Right now, a lot of the tech-support scams originate in India,” he says. “I foresee those organizations switching to deepfake technology to make themselves seem more legitimate. For example, if they’re calling someone from the Deep South [of the U.S.], they’ll maybe have the accent of a Deep South law officer,” he says.
Many common fraud schemes targeting consumers already rely on manipulation of public figures’ likenesses or images, which will only become more effective through deepfakes. Faked celebrity endorsements based on just images and written words have already caused widespread losses involving cryptocurrency investment ploys and other imposter scams.
Duquette once investigated a celebrity endorsement fraud case. “People acted on the endorsement of pictures only because the celebrities depicted in those pictures had a following of their own,” Duquette says about the case. “The emergence of deepfakes will result in more of that. If you can look at a picture of a celebrity and say, ‘I’m going to go buy a product now because this celebrity is endorsing it,’ you are going to believe that endorsement even more if you now see a video of that celebrity endorsing it.”
Fraudsters could also use deepfake technology to generate video clips purporting to show bribery agreements or transactions involving key procurement personnel from organizations. Fraudsters could weaponize deepfake clips of executives making startling announcements about their companies for a variety of fraud schemes involving market manipulation — from pump-and-dump schemes to damaging competitors’ stock prices.
Fraudsters can use synthetically generated audio and video clips impersonating individuals to thwart biometric authentication systems in account takeover attacks.
“The quality of a lot of biometric systems, whether it’s facial recognition or iris scan or fingerprints or voice recognition, is usually not very high — unless it’s a super-classified application — because of the cost,” Manning says.
One recently released study, which focused on two popular facial recognition platforms, found that videos created by GANs fooled the facial recognition systems in up to 95% of the attempts. (See Vulnerability of Face Recognition to Deep Morphing, by Pavel Korshunov and Sebastien Marcel, Idiap Research Institute.)
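To see why such attacks can work, consider that many verification systems boil down to comparing a numerical “embedding” of the presented face against an enrolled template and accepting anything above a similarity threshold. The Python sketch below is a hypothetical illustration of that logic; the embed() placeholder, the function names and the threshold value are assumptions, not any vendor’s actual implementation.

import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    # Placeholder: a real system would run a deep face-recognition network here
    # to map the image to a compact feature vector.
    flat = image.flatten().astype(float)
    return flat / (np.linalg.norm(flat) + 1e-9)

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
    # Cosine similarity between the presented face and the enrolled template.
    similarity = float(np.dot(embed(probe), embed(enrolled)))
    # A GAN-generated face whose similarity lands above the threshold is accepted
    # exactly as a genuine, live face would be.
    return similarity >= threshold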
The impact deepfakes might have on fraud schemes could pale in comparison to the technology’s potential to complicate investigations that rely on any forms of video or audio evidence.
“I think we’re quickly entering a situation where people are not going to be able to verify the authenticity of video, images or audio simply by viewing, looking or listening,” Ruotolo says.
Inability to establish the authenticity of evidence could make court proceedings much more difficult for fraud examiners. “Can you have ‘beyond a reasonable doubt’ when this type of technology could be involved in the case? And at what cost?” Manning asks.
Fraudsters might even try to create — or pay someone else to create — deepfake media that directly contradict audio or video clips submitted to courts to cast doubt on the veracity of evidence, which would muddy the waters for juries and judges.
How will fraud examiners deal with the prospect of previously unquestioned forms of evidence, such as confessions or other testimony recorded in audio or video format and surveillance footage, no longer carrying the same weight they did before deepfakes?
“We’re going to have to, as fraud examiners and investigators, start relying on old-school methodology a little bit more, where you can’t just rely on a video when it’s not completely 100% verifiable,” Duquette says. “It’s going to be an ongoing challenge to keep up with.” Corroborating video and audio evidence through testimony and documentation is one of those more traditional tactics that could defend against claims that media was manipulated.
Modern problems require modern solutions. According to Ruotolo, that could lead to fraud examiners adding to their skillsets. “I think you’re going to see a whole new trend in the CFE arena of fraud investigators that have to develop a specialty of forensic analysis of audio and video,” he says.
The serious implications of deepfake media for fraud schemes and investigations, not to mention the geopolitical ramifications, demand earnest efforts to develop solutions for consumers and organizations. Deeptrace claims on its website that the company developed the first-to-market deepfake detection solution. Other tech start-ups that have announced efforts to combat deepfakes include Faculty and Amber Video, and more companies will likely throw their hats into the ring.
Meanwhile, major technology companies — Facebook, Amazon and Microsoft — have collaborated to launch the Deepfake Detection Challenge, which invites people to use a dataset of manipulated images and video content to develop technologies designed to detect deepfake media. Google also released a similar dataset, named FaceForensics, for developers to use.
“One of the first steps is having some good training data to leverage technology to help detection,” Ruotolo says. “I think that’s definitely going to generate some interest and an open-source approach to improving the tools we have for detection.”
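As a rough illustration of what that open-source approach might look like, the Python sketch below trains a toy binary classifier to label video frames as genuine or manipulated. The random tensors stand in for a labeled dataset such as the challenge’s, and the architecture and settings are assumptions for demonstration, not a description of any released detector.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Random tensors stand in for pre-extracted 64x64 grayscale video frames
# and their labels (1 = manipulated, 0 = genuine).
frames = torch.randn(256, 1, 64, 64)
labels = torch.randint(0, 2, (256, 1)).float()
loader = DataLoader(TensorDataset(frames, labels), batch_size=32, shuffle=True)

detector = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
    nn.Flatten(), nn.Linear(8 * 16 * 16, 1)
)
opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):
    for x, y in loader:
        loss = loss_fn(detector(x), y)   # predicted logit vs. manipulated/genuine label
        opt.zero_grad(); loss.backward(); opt.step()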
Technological solutions might not become available, or affordable, for organizations in the immediate future. And major internet platforms like YouTube or Facebook incorporating deepfake detection and prevention into their infrastructures might not provide much of a defense against fraud schemes relying on phone or email delivery methods. Even if they could, relying on Google, YouTube, Facebook and other internet gatekeepers for protection from fraud wouldn’t constitute robust risk management.
So, what can organizations do? The experts Fraud Magazine interviewed arrived at some consensus.
“Training is going to be one of the most important things we can do that could be cost-effective right now,” Manning says. “It’s important to show this technology to organizations and demonstrate capabilities so they can see how realistic it is and start calling for and establishing policies that require some type of secondary authentication or validation before transfer of funds.”
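The kind of secondary-validation policy Manning describes can be stated almost as plainly as code. The Python sketch below is a hypothetical illustration of such a rule: no large transfer request, however convincing the voice or email behind it, executes without out-of-band confirmation. The function names, threshold amount and callback step are assumptions, not a prescribed standard.

from dataclasses import dataclass

@dataclass
class TransferRequest:
    requester: str
    amount: float
    beneficiary: str

CALLBACK_THRESHOLD = 10_000.0  # example policy value, set by each organization

def confirm_via_registered_channel(request: TransferRequest) -> bool:
    # Placeholder: staff call the requester back on a number already on file
    # (never one supplied in the request itself) and confirm the details.
    return False  # defaults to "not confirmed" until a human completes the callback

def approve_transfer(request: TransferRequest) -> bool:
    # Small routine transfers pass; anything at or above the threshold requires
    # independent confirmation before funds move.
    if request.amount < CALLBACK_THRESHOLD:
        return True
    return confirm_via_registered_channel(request)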
Duquette echoes Manning’s views. “From a CFE perspective, we are all, for the most part, in a very good position to advise our organizations or make our clients aware of these sorts of threats,” he says. “I think companies need to re-examine their risk management processes. They need to understand that these threats are out there and that it’s moving beyond phishing emails. Now it’s morphing into video and audio.”
“I think the best approach for organizations, now and in the long term, is good procedures and good controls,” Ruotolo says. “A really good control doesn’t get thwarted by changes in technology.”
The evolution of technology won’t slow, and organizations must continuously prepare for emerging threats. As disturbing or unsettling as the implications of deepfake technology might seem to fraud examiners, awareness is the first half of the battle; without it, they can’t lead efforts to adapt to and confront those risks.
Read more: Fraudsters could use deepfakes for stock profiteering, extortion, virtual kidnapping
Mason Wilder, CFE, is a research specialist at the ACFE. Contact him at mwilder@ACFE.com.
In September 2018, Tesla’s stock price plummeted by almost 10% in one day after a podcast featured CEO Elon Musk smoking marijuana with host Joe Rogan.
A Musk tweet the previous month about taking Tesla private resulted in a Securities and Exchange Commission investigation. Consider the impact a deepfake video of Musk saying that Tesla would switch to gas-powered vehicles might have on his company’s stock price. One of the results could be short sellers commissioning the deepfake to turn a hefty profit.
Alternatively, a deepfake video of a well-liked celebrity — say basketball superstar LeBron James — endorsing a little-known company could provide enough of a boost for that company’s fraudster founders to pull off a lucrative pump-and-dump or exit scheme.
A low-quality July 2017 video featuring Austrian politicians discussing a quid pro quo arrangement caused a political crisis after its release in May 2019, including the vice chancellor’s resignation and early parliamentary elections. Extortionists could produce deepfake videos like the real clip in the Austrian case to elicit hush money payments from politicians.
The extortion implications of deepfakes also reach beyond high-profile victims. “I think you’re going to see a rapid increase in extortion scams,” says James Ruotolo, CFE, senior director of product management and marketing for analytics software firm SAS’s Fraud and Security Intelligence Division. “Imagine if fraudsters can take your picture off your LinkedIn or Facebook profile, or family videos on YouTube, and plug it into a deepfake technology to place your image in a very compromising video situation. You’re absolutely going to see people who are being set up for those types of scams,” Ruotolo says.
The technology also could bolster “virtual kidnapping” schemes, in which relatives are tricked into sending ransom money to release victims of a non-existent abduction.