The grand scheme of things
In May 1925, a French newspaper published an article highlighting the deteriorating state of the Eiffel Tower. Less than four decades after its construction, the historic Paris landmark required significant repairs and maintenance. The article suggested that the French government might find it more reasonable to dismantle the tower than to undertake costly renovations.
With dollar signs in his eyes, Victor Lustig, a career con artist, set out to exploit this information for his own gain. Lustig acquired forged government stationery to impersonate a government official with the authority to sell the Eiffel Tower. Through social engineering and natural charisma, Lustig successfully deceived the top five iron salvage companies in Paris into believing he was a legitimate government official. This elaborate scheme led to multiple bids, with one dealer, André Poisson, paying Lustig a substantial bribe to secure the contract.
Lustig subsequently fled the country with his ill-gotten gains, speculating (correctly) that Poisson would be too embarrassed to report the crime to the authorities. Emboldened by his successful con, Lustig later returned to Paris and sold the Eiffel Tower a second time. There was no third sale, however; this time the victim alerted the authorities, and Lustig fled to the U.S. to escape prosecution.
Lustig’s notorious sales of the Eiffel Tower are both a colorful anecdote and a historical marker for impersonation fraud schemes. Although technology has advanced dramatically since the early 20th century, modern schemes continue to exploit trust via fabricated credibility, and today’s crypto and deepfake scams are more scalable and harder to detect. Fraudsters use technology to prey upon people with promises of doubling their money, or they pose as trusted public figures to deceive victims into divulging sensitive information.
Data from the U.S. Federal Trade Commission (FTC) indicates impersonation scams were among the top fraud categories in 2024, accounting for nearly $3 billion in losses. These scams often involved fraudsters posing as trusted entities, including government agencies, banks and well-known businesses, to deceive individuals into transferring money or revealing sensitive information. Investment scams accounted for $5.7 billion in losses in 2024, according to the FTC, and 79% of those who reported an investment-related scam lost money. Scams conducted via social media caused larger losses than any other contact method, totaling $1.9 billion in 2024; 70% of people contacted by a fraudster on a social media platform reported a loss.
As society becomes increasingly reliant on digital platforms for communication and commerce, the urgency to establish safeguards has never been greater. The evolution of impersonation and investment fraud isn’t just a story of clever criminals; it’s a call for institutions, regulators and individuals to adapt faster than fraudsters do. This article provides recent examples of impersonation and investment schemes, explains how social media propels them, examines the role of AI-generated deepfakes in these schemes and offers strategies that organizations can implement to combat deepfakes.
Social media bots are automated software tools that interact on social platforms. They operate either partially or fully autonomously and are often programmed to imitate human behavior. Using software that spots these automated patterns, social media networks can now block bots with relative ease, but they struggle with manually created impostor accounts. Business models, anonymity policies and reliance on user reports make it difficult to eliminate all fake accounts, especially those that look authentic and spread harmful content. The New York Times reported in 2020 that Facebook estimated it had 90 million fake accounts (5% of its profiles) at the time. Accounts purporting to belong to public figures were a particular problem for the social media behemoth. Twitter (now X) found it particularly challenging to identify impostor accounts due to its policy allowing parody accounts, which had to be clearly labeled.
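To make the idea of spotting automated patterns concrete, here is a minimal sketch of the kind of rule-based screening a platform might apply. The features, thresholds and weights are illustrative assumptions, not any platform’s actual detection rules:

```python
from dataclasses import dataclass

@dataclass
class AccountActivity:
    posts_per_hour: float          # average posting rate
    unique_post_ratio: float       # distinct posts / total posts
    account_age_days: int
    follower_following_ratio: float

def bot_score(a: AccountActivity) -> float:
    """Crude heuristic score in [0, 1]; higher suggests automation.
    All thresholds here are illustrative assumptions."""
    score = 0.0
    if a.posts_per_hour > 10:              # inhuman posting cadence
        score += 0.4
    if a.unique_post_ratio < 0.3:          # mostly duplicated content
        score += 0.3
    if a.account_age_days < 7:             # very new account
        score += 0.2
    if a.follower_following_ratio < 0.01:  # follows many, followed by few
        score += 0.1
    return min(score, 1.0)

# Example: a day-old account posting the same link 50 times an hour
suspect = AccountActivity(posts_per_hour=50, unique_post_ratio=0.05,
                          account_age_days=1, follower_following_ratio=0.001)
print(f"Bot score: {bot_score(suspect):.2f}")  # 1.00
```

A manually operated impostor account trips none of these signals, which is precisely why such accounts are so much harder to eliminate than bots.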
In November 2022, Twitter launched a new subscription service called Twitter Blue, which allowed users to purchase blue check mark verification for a monthly fee of $8. Blue check marks had distinguished the verified accounts of public figures and organizations from impersonators. The policy change meant anyone with $8 could purchase a digital seal of authenticity that bolstered their account’s posts and online activity with a hallmark of legitimacy.
Elon Musk, CEO of X, claimed the new system would help to ensure fairness between everyday users and those who previously held check marks, such as celebrities, companies, politicians and journalists. Instead, the policy change led to a surge in impersonations. Accounts that were once clearly fake could buy verification, giving them the appearance of belonging to the legitimate individuals or organizations they impersonated.
Fake accounts inundated X following the launch of Twitter Blue. Numerous bad actors purchased blue check marks and impersonated public figures, brands and institutions. An X account and handle designed to look like Eli Lilly, a U.S. pharmaceutical company, posted, “We are excited to announce insulin is free now.” The bogus post resulted in Eli Lilly’s stock falling 4.37%, wiping out more than $15 billion in market capitalization. The company later apologized for the fake post and clarified that its life-saving medication isn’t free.
Military defense and aerospace manufacturer Lockheed Martin was also caught in the fallout from Twitter Blue’s launch. “We will begin halting all weapons sales to Saudi Arabia, Israel, and the United States until further investigation into their record of human rights abuses,” posted an account with the username @LockheedMartini. The corporation’s share price dropped by about 5.5%, and its market capitalization fell by more than $7 billion.
Another fake Twitter Blue account spread dangerous misinformation about the conflict in Sudan in 2023. The fake @RSFSudann account, claiming to represent the paramilitary Rapid Support Forces (RSF) fighting for control of the country, posted that the group’s leader, Mohamed Hamdan Dagalo (also known as Hemedti), had died from injuries sustained in combat. The legitimate RSF account, @RSFSudan, didn’t have a check mark, and the post gained around 1 million views before removal. The misinformation sowed confusion and affected civilians relying on accurate information during the Sudan crisis.
In April 2023, following the chaos caused by fake accounts flooding the platform, Twitter Blue was rebranded as X Premium, a subscription service offering a blue check mark after an eligibility review. X Premium includes features such as longer posts, fewer ads and post editing. The check mark now indicates a paying subscriber rather than a verified public figure, and legacy accounts lost their verification unless they subscribed.
Companies should prioritize reducing fraud, but some social media platforms have found that fraud can have a positive effect on their bottom line. Internal documents from Meta, the owner of Facebook and Instagram, projected in 2024 that the company would earn about $16 billion, or 10% of its overall annual revenue, from running advertising for scams and banned goods. And Meta’s 2025 U.S. Securities and Exchange Commission (SEC) disclosures state that efforts to address illicit advertising “adversely affect our revenue, and we expect that the continued enhancement of such efforts will have an impact on our revenue in the future, which may be material,” as reported by Reuters.
Technology enables bad actors to operate with greater speed, reach and sophistication. In November 2025, a Reuters reporter tested Meta’s ad approval and enforcement systems by attempting to run cryptocurrency advertisements offering a 10% weekly return, or roughly a 14,000% annual return. Meta approved the ads, which more than 20,000 users across the U.S., Europe, India and Brazil viewed. The reporter didn’t accept funds from any users, but Meta’s artificial intelligence (AI) tools recommended specific enhancements to the fraudulent ad copy to improve engagement.
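The arithmetic behind that annual figure is simple compounding, as this quick sketch shows:

```python
# A 10% weekly return, compounded over 52 weeks
weekly_return = 0.10
final_multiple = (1 + weekly_return) ** 52   # ~142x the principal
annual_return_pct = (final_multiple - 1) * 100
print(f"{annual_return_pct:,.0f}%")          # ~14,104%
```

No legitimate investment compounds at anything close to that rate, which is exactly the kind of sanity check the approval system failed to apply.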
The failure of social media platforms to prevent impersonation after monetizing verification demonstrates that verification systems must clearly signal authenticity and accountability rather than serve as revenue tools. Otherwise, they risk enabling misinformation and abuse.
By exploiting the reach, credibility and targeting capabilities of social media, fraudsters can promote fictitious investment opportunities and manipulate investor trust. The media and online influencers widely portrayed the rise of bitcoin and other cryptocurrencies as a fast track to generational wealth with minimal effort. This narrative fueled a gold-rush mentality in which investors hoped to replicate that success by jumping into the next big token. These tokens often promised massive returns at low entry costs, luring buyers with the hope of turning a few dollars into life-changing wealth. Celebrities and respected public figures often backed or promoted these projects, lending them an air of legitimacy that masked their often speculative or fraudulent nature.
Affinity fraud is one example of a coordinated investment fraud driven by social media. In this type of scheme, fraudsters target members of specific groups, such as religious or ethnic communities, exploiting trust and shared affiliations to promote fraudulent investment opportunities. U.S. District Judge Frederic Block sentenced Jebara Igbara, also known as “Jay Mazini” on Instagram, to 84 months in prison for wire fraud, wire fraud conspiracy and money laundering in 2024. Igbara perpetrated overlapping affinity fraud schemes from 2019 to 2021 that resulted in investors losing at least $8 million. Igbara cultivated a social media image of wealth and piety, presenting himself as a successful investor and devout Muslim. On Instagram, he amassed nearly 1 million followers and flaunted his supposed generosity with videos of himself handing out cash to grocery shoppers, fast-food workers and a woman at an airport who’d lost her purse.
Igbara operated an investment fraud scheme through Halal Capital LLC, targeting Muslim-American investors in New York. He claimed they were investing in stocks, electronics resale and personal protective equipment sales. Instead, he ran a Ponzi scheme and used most of the investors’ money for personal expenses, luxury cars and gambling. To pay “returns” and keep investors engaged in his scheme, Igbara launched a second scam on Instagram and other platforms, offering above-market prices for cryptocurrency. He sent victims fake wire transfer confirmations to appear legitimate, never paid for the cryptocurrency and stole the assets.
At their core, affinity scams rely on the reputation of trusted advisers or individuals to make victims feel secure in their financial decisions. Investors in securities need timely information to make relevant decisions, whether to buy, sell or hold, and they place a large degree of trust in their sources of information.
Social media has also become a major driver of cryptocurrency schemes in recent years. Fraudsters impersonate new or established businesses offering fraudulent crypto coins or tokens. They’ll say the company is entering the cryptocurrency market by issuing its own coin or token, and they might create social media ads, news articles or a website to support their claims and trick people into buying.
In December 2025, the SEC filed charges against three purported crypto-asset trading platforms and four investment clubs, alleging they’d orchestrated an investment confidence scheme that defrauded retail investors out of more than $14 million. AI Wealth, Lane Wealth, AIIEF and Zenith allegedly ran “investment clubs” via the WhatsApp messaging platform, promoting them through social media ads. These clubs used supposed AI-generated tips to gain investors’ trust, then directed investors to fund accounts on fake crypto trading platforms — Morocoin, Berge and Cirkor — which falsely claimed to possess government licenses. The platforms offered bogus “Security Token Offerings” tied to nonexistent companies. No trading occurred, and when investors tried to withdraw funds, defendants demanded advance fees. They allegedly stole at least $14 million from U.S. retail investors, funneling the money overseas through layered bank accounts and crypto wallets. The SEC is seeking injunctions, civil penalties and disgorgement with interest.
Pakistan’s National Cyber Crime Investigation Agency dismantled an “international cartel” behind online investment fraud worth about $60 million. Coordinated raids in Karachi, Pakistan, led to the arrest of 15 foreign nationals and 19 Pakistani citizens. According to Sindh Home Minister Zia-ul-Hassan Lanjar, suspects used social media and messaging apps to lure victims in Pakistan and abroad into private groups promoting high-return trading opportunities. Over weeks, they posed as expert traders, then directed victims to fake platforms showing fabricated profits to build trust and encourage larger deposits.
When investments neared $5,000, victims were hit with extra charges for taxes or withdrawal fees. After payments, access was blocked and communication ceased. The scheme relied on international SIM cards to manage multiple accounts and obscure locations. Tracing funds was difficult because they were layered through local bank accounts, converted to cryptocurrency and moved across borders. Investigations continue as authorities track wallets and exchanges.
Common sense may dictate that pop music icon Taylor Swift wouldn’t be giving out free Le Creuset cookware on Facebook and TikTok, but an AI-generated deepfake of her fooled many fans. A deepfake is video or audio generated with AI that replicates someone’s voice, image and movements; fraudsters use deepfakes to perpetrate scams. Within a single week in the fall of 2023, actor Tom Hanks, journalist Gayle King and YouTube personality MrBeast reported AI-generated versions of themselves featured in misleading promotions for dental plans, joint health supplements for pets and iPhone giveaways, respectively. According to the celebrities, none of them gave permission for their likenesses to be used.
These aren’t isolated incidents. The proliferation of AI tools at cybercriminals’ disposal has helped propel impersonation scams, which surged 148%, with deepfake audio and video calls tricking victims into authorizing fraudulent transfers, according to a 2025 Identity Theft Resource Center report. To create real-time deepfakes, fraudsters harvest publicly available videos, images and sound clips of individuals and turn them into digital clones for nefarious purposes.
In March 2025, a deepfake video impersonating financial analyst Michael Hewson surfaced on Facebook. The AI-generated video falsely depicted Hewson promoting a WhatsApp group that promised to double investments rapidly. The realistic nature of the deepfake led many to believe its authenticity, resulting in financial losses for those who followed the fraudulent investment advice. Hewson publicly disavowed the video, emphasizing that he doesn’t offer investment advice through WhatsApp or other messaging services.
Fraudsters use deepfake technology to prey on vulnerable individuals. In 2025, a recently divorced bitcoin investor lost his entire retirement savings worth one full bitcoin after being ensnared in an AI-powered “pig butchering” romance scam. Pig butchering refers to an investment scam in which fraudsters gain the trust of victims over time and then deceive them into investing in fake crypto assets or another fraudulent investment opportunity. In this case, the scammer used generative AI to craft a convincing persona, including a synthetic portrait and a female trader identity. They conducted live deepfake video calls, overlaying a fabricated face onto their actual body in real time, with accurate lip-syncing and natural lighting, effectively simulating a real romantic partner.
Over several weeks, the fraudster built an emotional connection with the victim, making false promises to double his bitcoin holdings, preying on his financial ambitions and emotional vulnerability. When the victim transferred the bitcoins to the scammer’s wallet, the funds were immediately unrecoverable. AI-enhanced social engineering, through deepfakes and synthetic identities, can facilitate highly convincing and devastating financial scams in the cryptocurrency space.
Combating the growing threat of deepfakes requires a multifaceted approach that blends policy, technology and personal safeguards. On an individual level, families and close contacts can establish private “safe words” from details or personal anecdotes not available online to verify identities during suspicious interactions.
The National Cybersecurity Alliance offers a four-step process for creating a safe word.
At the government level, the European Union is at the forefront of legislative efforts to combat the illicit and misleading use of deepfakes. Article 35 of the EU Digital Services Act requires the largest online platforms and search engines to implement “reasonable, proportionate and effective mitigation measures” that proactively identify deepfakes distributed on their platforms. Complementing this framework, Article 50 of the EU Artificial Intelligence Act imposes disclosure obligations on all deployers of AI systems, mandating that AI-generated or manipulated image, video or audio content be clearly identified as such.
Individuals intent on committing deepfake-related fraud schemes are unlikely to voluntarily comply with these disclosure requirements, but the legislation meaningfully shifts responsibility to the platforms that host and disseminate such content. The EU has established a framework of accountability that compels major online platforms to take active steps to mitigate the spread of deceptive synthetic media.
In the U.S., federal enforcement is evolving from general statutes to targeted laws, with current priorities being political interference, sexual exploitation and fraud. There’s no single federal law outlawing all deepfakes, but prosecutors apply existing laws, such as wire fraud, identity theft, cyberstalking and extortion, when synthetic media is used to commit fraud. At the national level, the TAKE IT DOWN Act took effect in 2025. The act “criminalizes the publication of nonconsensual intimate visual depictions. It also requires covered platforms to: (1) create a process for consumers to notify covered platforms of a nonconsensual intimate visual depiction on the platform; and (2) remove such depictions within 48 hours of receiving notice.” Platforms must remove flagged material or face penalties, and victims don’t need to prove harm; creation or distribution alone is enough.
Additionally, the National Defense Authorization Act requires defense agencies to assess deepfake risks in military and foreign influence contexts. Deepfakes are now treated as a national security threat alongside cyberattacks and propaganda. The U.S. Federal Trade Commission Act allows regulators to penalize businesses using deepfakes for deceptive ads, fake endorsements or undisclosed AI-generated content. Violations can lead to fines and corrective actions.
Aside from federal policy, research into technological tools to combat deepfakes is crucial. Researchers are exploring embedding security into content from the start, making it harder for deepfakes to cause damage. Instead of looking for signs of tampering in images or videos after they appear, the future of detection and defense strategies will take a proactive approach. One technique being studied is digital watermarking, the practice of embedding invisible markers, such as a code or images, into media before it’s shared. This makes it possible to verify authenticity in real time, trace the source, prevent tampering and even provide legal evidence.
A January 2025 paper published in the journal Computers, Materials & Continua states that watermarking offers four big advantages: real-time authenticity verification, source tracing, tamper prevention and legal evidentiary value.
The paper also highlights research gaps, such as the need for cross-domain watermarking (ensuring that watermarks are detectable across different media types) and adaptive strategies that evolve with new threats.
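To make the embedding idea concrete, here is a toy sketch that hides and recovers a short marker in an image’s least-significant bits. This illustrates the general concept only; it isn’t the paper’s method, and production watermarking schemes must survive compression, cropping and other editing that this naive approach won’t:

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, mark: bytes) -> np.ndarray:
    """Hide `mark` in the least-significant bits of a grayscale image."""
    bits = np.unpackbits(np.frombuffer(mark, dtype=np.uint8))
    flat = pixels.flatten()  # flatten() returns a copy
    if bits.size > flat.size:
        raise ValueError("image too small for watermark")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, length: int) -> bytes:
    """Read back `length` bytes from the least-significant bits."""
    bits = pixels.flatten()[:length * 8] & 1
    return np.packbits(bits).tobytes()

# Round trip on a random 64x64 stand-in "image"
img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
marked = embed_watermark(img, b"authentic")
print(extract_watermark(marked, 9))  # b'authentic'
```

Because the marker is embedded before the media is shared, any downstream copy can be checked for its presence, which is the real-time verification advantage the paper describes.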
A study published in March 2025 examined whether multimodal large language models (MLLM) work as deepfake detectors. Researchers tested 12 of the newest MLLMs for detecting deepfake images. They found that the top performers were just as good as or better than traditional image-based detectors, especially on unfamiliar types of deepfakes, with no extra training (zero-shot). Having the newest version of a model or advanced reasoning features didn’t significantly boost performance in this task. What really helped was model size: the larger the model, the better it did in spotting fakes. Some of the multimodal models were able to detect deepfakes they’d never seen before, not just the datasets they were trained on.
Based on the study, MLLMs have practical implications for fraud examiners, as the sketch below illustrates.
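As one hypothetical illustration of the zero-shot approach the study describes, the sketch below sends an image to a general-purpose multimodal model and asks for a real-versus-fake judgment. The OpenAI client, model name and prompt are assumptions for illustration, not the researchers’ code, and any verdict should supplement, never replace, other verification controls:

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

def screen_image(path: str) -> str:
    """Ask a multimodal model whether an image looks AI-generated."""
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any capable MLLM could stand in
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Does this image show signs of AI generation or "
                         "manipulation? Answer REAL or FAKE, then explain "
                         "the visual evidence."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# print(screen_image("suspect_profile_photo.jpg"))  # hypothetical file
```

No task-specific training is required here; the model is applied off the shelf, which is what made the study’s zero-shot results notable.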
Embracing integrated, proactive strategies that combine embedded security with advanced AI-driven analysis may help detect and defend against deepfakes.
Synthetic audio and video are becoming increasingly indistinguishable from authentic communications. As a result, employee anti-fraud training that relies on spotting visual or auditory “tells,” such as blurry facial features, unnatural lighting and poor lip synchronization, will become obsolete. In practice, most deepfake-enabled fraud schemes exhibit the same behavioral red flags: sudden urgency, a desire for secrecy, attempts to bypass established processes and unusual timing.
Updated training programs should instead emphasize contextual analysis and critical thinking, reinforced by a robust system of internal controls. Employees should be trained not merely to assess how a request appears, but to question why the request is being made, whether it aligns with established practices and what safeguards are required before acting. A prudent manager wouldn’t challenge an employee for seeking secondary confirmation when company assets, sensitive data or reputational risk are at stake. On the contrary, such behavior should be explicitly encouraged and reinforced.
To implement this shift in mindset, organizations should equip employees with a simple, repeatable decision-making framework. One effective approach is the STOP framework, which provides clear guidance in moments of uncertainty.
In the battle against deepfakes, the best offense is a collaborative defense: coordinated deepfake attack simulations across departments to identify vulnerabilities and improve responses. A drill could entail employees receiving a mock email from the human resources department or a fake video call from the CEO. Cybersecurity specialists should lead strong training and awareness programs that address emerging AI-driven threats and teach effective strategies to mitigate these risks.
From Lustig’s audacious Eiffel Tower cons to today’s AI-powered impersonation schemes, fraud has always fed off trust and perceived legitimacy. The scale and speed with which fraud proliferates combined with the availability of social media and generative technologies mean fraudsters can now easily manipulate markets, spread misinformation and exploit individuals. As verification and detection tools lag behind innovation, the responsibility to implement stronger safeguards and educate users falls on platforms, regulators and organizations. Combating modern fraud isn’t solely about catching criminals; it’s about staying ahead of them in a digital world where authenticity is under constant attack.
Alexander Dokuchaev, CFE, CPA/ABV, is a senior manager in the forensic accounting and transaction advisory team at Solutions Group Accounting Firm. Contact him on LinkedIn at linkedin.com/in/alexander-dokuchaev.