
Written by: Elie Khoury

VP Research

Truth in the age of AI

With generative AI, bad actors can spoof audio and video of anyone—especially of global leaders. As deepfakes rise, we at Pindrop have been focused on answering the question: is this human or AI? 

When an online post has indicators of authenticity, like credible account activity, it can be nearly impossible to tell what’s real and what’s not. A recent deepfake of Elon Musk appears to be a straightforward cryptocurrency scam, but its aftereffects demonstrate the complexities of deception and force us to evaluate the importance of information validity. We aim to help solve that problem—and get to the truth—using Pindrop Pulse, our audio deepfake detection technology.

How it started

On Tuesday, July 23, 2024 at 10:30 pm ET, members of Pindrop’s research team discovered what appeared to be a live stream of Elon Musk on YouTube. We quickly determined that the live stream was actually a 6-minute 42-second AI-generated audio loop that mimics Elon Musk’s voice and discusses current U.S. politics, the 2024 election, and their potential effects on the future of cryptocurrency. The deepfake then urges the audience to scan a QR code, go to a “secure site,” and “effortlessly double” their cryptocurrency.

At its peak, the stream attracted 140K viewers, and it was live for at least 17 hours.

Why did the scam appear credible?

We’re used to looking for clues that signal authenticity in the media we consume, like verification badges, account activity, and more. But those signals are easily spoofed and can’t always be trusted. Here’s how this scam made itself appear real: 

  • Account legitimacy: The fraudulent account had a complete profile with a verification badge, 162K subscribers, over 34M total views, and was active since 2022. The account closely resembled Tesla’s official account, so at first glance, viewers may have struggled to spot that the account was fraudulent. 
  • Reputable speaker: By choosing Elon Musk, a vocal leader in the cryptocurrency space, the fraudsters added a sense of legitimacy to their scam, helping them better trick viewers. 
  • Staying close to the truth: The statements in the video are similar to previous remarks made by Musk. By repeating or slightly adjusting Musk’s previous remarks, the video likely raised fewer red flags for viewers. 

Leveraging liveness detection to spot the deepfake

Our audio deepfake detection technology, Pindrop Pulse, generated a segment-by-segment breakdown of a portion of the audio, with real-time results every 4 seconds. Pulse detected segments of synthetic voice in the audio—and concluded that it was likely a deepfake. We reached out to YouTube to report the video and, as of July 24, 2024 at 1:30 pm ET, the account had been taken down.

Graph showing the detection of synthetic voice in an audio clip.
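The segment-by-segment approach described above can be illustrated with a minimal sketch. This is not Pindrop Pulse’s actual implementation—the model, thresholds, and aggregation rule here are all assumptions for illustration; only the 4-second analysis window comes from the description above.

```python
# Illustrative sketch of segment-wise deepfake scoring (hypothetical; not
# Pindrop's proprietary model). Audio is split into fixed-length windows,
# each window receives a synthetic-voice score, and the scores are
# aggregated into an overall verdict.

SEGMENT_SECONDS = 4       # analysis window length, per the post
SEGMENT_THRESHOLD = 0.5   # assumed cutoff: score above this => segment is synthetic
CLIP_THRESHOLD = 0.3      # assumed: flag the clip if >30% of segments are synthetic


def segment_audio(samples: list[float], sample_rate: int) -> list[list[float]]:
    """Split raw audio samples into consecutive 4-second segments."""
    step = SEGMENT_SECONDS * sample_rate
    return [samples[i:i + step] for i in range(0, len(samples), step)]


def classify_clip(segment_scores: list[float]) -> bool:
    """Return True (likely deepfake) if enough segments score as synthetic.

    `segment_scores` are per-segment synthetic-voice probabilities in [0, 1],
    as would be produced by a liveness-detection model.
    """
    synthetic = sum(score > SEGMENT_THRESHOLD for score in segment_scores)
    return synthetic / len(segment_scores) > CLIP_THRESHOLD
```

A real system would replace the per-segment scores with the output of a trained liveness model and could stream verdicts in real time as each window completes, rather than waiting for the full clip.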

With our new source attribution technology, Pulse also identified ElevenLabs as the voice cloning vendor used to create the deepfake, as it had done with the President Biden robocall incident earlier this year. We reached out to ElevenLabs so they could investigate further.

Defending against deepfakes

Generative AI is powerful—and can be a force for good—but it can also be weaponized to deceive us. When our senses aren’t enough to validate the truth, we need to turn to technology that can assist us. Pindrop Pulse, our advanced audio deepfake detection technology, integrates liveness detection to help distinguish between live and synthetic audio. This technology empowers you with information to assess if what you’re hearing is real—and helps bring trust to the forefront of the media we consume.
