ACFE Insights Blog

Stolen Voices: The Dark Side of AI

Artificial intelligence (AI) is becoming an increasing problem, fueling not just hacking and fraud but also identity theft. Recently, there has been an influx of deepfakes of celebrities—not just their faces, but their voices.

By Abbie Staiger | June 2024 | Duration: 5-minute read

On social media platforms such as Instagram and TikTok, it has become a big trend to post AI-generated singing in an artist's voice to predict what one of their unreleased songs might sound like. For example, AI replicated Taylor Swift's and Post Malone's voices to sing one fan's prediction of what their unreleased collaboration, "Fortnight," would sound like. While it all seems playful and harmless, there is a dark side to this AI capability that is encroaching on actors' and singers' jobs.

What is AI Voice Theft/Deepfake? 

AI voice theft, often referred to as deepfake audio, involves using artificial intelligence to replicate someone’s voice with astonishing accuracy. This technology employs sophisticated algorithms and neural networks to analyze and mimic the speech patterns, tone and cadence of a target individual. While deepfakes started with visual content, such as fake videos of celebrities or politicians, the advancement in audio deepfakes is now a pressing concern. 

This technology can be used for various purposes, ranging from creating entertaining content to committing fraud and identity theft. The misuse of AI-generated voices can lead to severe implications, including unauthorized financial transactions, manipulation and damage to personal and professional reputations. 

How is this affecting the movie and television industry? 

The movie and television industry is grappling with the impact of AI voice theft. Voice actors and dubbing artists are particularly vulnerable, as AI can now replicate their voices, potentially reducing the need for their services. This technology allows producers to generate dialogue for characters without requiring the actor to record it. Consequently, actors fear losing their jobs and the unique creative control they have over their performances.

Moreover, the quality and emotional depth provided by human voice actors cannot be easily replicated by AI. Despite this, the cost-effectiveness and convenience of AI-generated voices present a significant temptation for producers, leading to ethical and economic dilemmas within the industry.  

Actors and AI: 

Several high-profile actors have become involuntary participants in AI voice theft scenarios: 

Scarlett Johansson 

Scarlett Johansson’s legal team is demanding transparency from OpenAI regarding the development of an AI voice assistant named “Sky,” which Johansson claims sounds eerily similar to her own voice. This issue surfaced after OpenAI’s live demonstration of the voice, leading many to draw comparisons to Johansson’s character in the 2013 film “Her.” Johansson revealed that OpenAI CEO Sam Altman had previously approached her about licensing her voice for the assistant, but she declined that offer. Johansson then went on to state that she was “shocked, angered and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference.” 

Keanu Reeves 

Keanu Reeves is actively fighting against AI-generated fraud through a notable partnership with HomeEquity Bank, featuring in a series of deepfake videos designed to raise public awareness about the dangers of deepfakes. This inventive campaign uses Reeves' recognizable face to highlight the importance of critically examining digital content and remaining vigilant against fraudsters leveraging advanced AI tools. Additionally, Reeves has taken personal measures to combat the misuse of AI, including a clause in his film contracts that prohibits the use of AI during any stage of production. Reflecting on the issue, Reeves remarked, "When you give a performance in a film, you know you’re going to be edited, but you’re participating in that. If you go into deepfake land, it has none of your points of view. That’s scary."  

Bad Bunny 

Bad Bunny has expressed his outrage over a viral TikTok song, "NostalgIA," which uses AI to replicate his voice alongside fake vocals from Justin Bieber and Daddy Yankee. In a WhatsApp post to his 19 million followers, he vehemently condemned the track, stating that anyone who liked the song should leave his group chat and adding, "You don't deserve to be my friends. I don't want them on the tour either." His reaction reflects a growing concern among artists about the misuse of AI technology to clone their voices without consent, echoing similar sentiments expressed by Drake and The Weeknd when their voices were cloned for the viral track "Heart On My Sleeve." While some in the industry see potential in AI for music creation, the ethical and legal implications of voice cloning remain contentious.

SAG-AFTRA Strike: 

Background actors and extras were asked to spend a day being digitally scanned so that producers could insert AI versions of them into scenes instead of hiring and paying them for the days they would otherwise work. They were offered compensation only for the day of the scan and would receive no royalties for any production in which their digital likeness appeared. The concern this practice raised over extras losing their jobs was a key factor in the SAG-AFTRA strike.

How does this affect the anti-fraud industry? 

Even though the AI evolution is still in its early stages, it is crucial for the anti-fraud industry to be cautious about AI voice theft and deepfakes. While AI has recently been used to copy actors' and artists' voices, fraudsters could also use it to copy the voices of CEOs, high-level executives or even people you know in order to scam you. This kind of AI-driven impersonation could lead to severe security breaches, unauthorized access to sensitive information and financial fraud.

Organizations and individuals must stay vigilant and adopt advanced verification methods to counteract the potential threats posed by AI-generated voice theft. 

While AI voice technology offers remarkable capabilities, its misuse presents significant ethical, economic and security challenges. Balancing innovation with protection against misuse is essential as society navigates the evolving landscape of artificial intelligence.