
Written by: Nick Gaubitch

Director of Research

Sen. Blumenthal made his own deepfake. Pindrop caught it. Could you?


As AI technology continues to advance, the rise of deepfakes poses a serious threat. These manipulated images, videos, and audio recordings use artificial intelligence to create convincing but false representations of people and events. Unfortunately, deepfakes are often difficult for the average person to detect: in a Pindrop study, people identified deepfakes with only 57% accuracy, just 7 percentage points better than a coin toss. While some deepfakes are lighthearted, like the Pope in the white puffer coat or AI-generated music from Drake and The Weeknd, their existence casts doubt on the authenticity of legitimate recordings. Emerging technologies such as AI chatbots can be used for good, but it rarely takes long before they attract the creative minds of fraudsters. Criminals are already taking advantage of these disruptive tools to run misinformation campaigns, commit fraud, and obstruct justice.


In response, deepfake detection technologies must evolve just as quickly. More importantly, organizations that rely on voice verification need to review their security strategy and adopt a defense-in-depth approach. Focused on voice security since its inception, Pindrop has been building deepfake detection technologies since 2014. Pindrop’s liveness detection protects not only against synthetically generated speech (text-to-speech, voicebots, voice modulation, voice conversion) but also against recorded voice replay attacks. The good news is that machines are far better at detecting these fakes than humans are.
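To make the defense-in-depth idea concrete, here is a minimal sketch of how a voice channel might layer a liveness score with other verification signals. Every name, threshold, and signal here is a hypothetical placeholder for illustration, not Pindrop’s actual API or decision logic.

```python
# Hypothetical sketch of a defense-in-depth check for a voice channel.
# All names and thresholds are placeholder assumptions, not Pindrop's API.
from dataclasses import dataclass

@dataclass
class CallSignals:
    liveness_score: float     # 0.0 (likely synthetic/replayed) .. 1.0 (likely live)
    voice_match_score: float  # speaker-verification similarity, 0.0 .. 1.0
    device_known: bool        # e.g., caller device seen on this account before

def authenticate(signals: CallSignals,
                 liveness_threshold: float = 0.5,
                 voice_threshold: float = 0.8) -> str:
    """Layered decision: no single factor is sufficient on its own."""
    if signals.liveness_score < liveness_threshold:
        return "reject: possible deepfake or replayed audio"
    if signals.voice_match_score < voice_threshold:
        return "step-up: voice does not match enrolled profile"
    if not signals.device_known:
        return "step-up: unfamiliar device, request a second factor"
    return "accept"

print(authenticate(CallSignals(0.31, 0.92, True)))  # low liveness -> reject
print(authenticate(CallSignals(0.87, 0.91, True)))  # all layers pass -> accept
```

The point of the layering is that a spoofed voice which slips past one check still has to clear the others before a caller is accepted.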


At Pindrop Labs, researchers continuously improve every part of our fraud detection and authentication suites, where knowing whether speech comes from a real, live speaker is crucial. An important part of this work is watching for real-world examples of voice-based attacks that can validate our detection engines. One recent example that resonated through the media this past week is Senator Blumenthal’s opening remarks at the Senate hearing on AI. The Senator began speaking with his own voice and then switched to a deepfake impersonation of it. We ran the recording through our liveness detection engine to assess the veracity of the voice. In the video below, you will see the results of our analysis: a real-time liveness score and the corresponding decision on whether the voice is real or fake (a low score indicates a deepfake, while a high score indicates a live voice). What is particularly interesting here is the ability to correctly identify not only where speech is synthesized but also where it is not, and to do so in real time.
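As a rough illustration of the real-time scoring described above, the sketch below processes audio in short windows and maps each window’s liveness score to a live/fake decision via a threshold. The scoring function is a dummy stand-in; the actual detection model is proprietary and not shown here.

```python
# Minimal sketch of real-time liveness scoring, assuming audio arrives in
# short windows. The scorer is a dummy placeholder, not Pindrop's engine.
from typing import Callable, Iterable, Iterator, Tuple

def stream_decisions(windows: Iterable[bytes],
                     score_fn: Callable[[bytes], float],
                     threshold: float = 0.5) -> Iterator[Tuple[float, str]]:
    """Yield (score, decision) for each audio window as it arrives."""
    for window in windows:
        score = score_fn(window)  # low score -> deepfake, high -> live voice
        decision = "live" if score >= threshold else "deepfake"
        yield score, decision

def dummy_scorer(window: bytes) -> float:
    # Placeholder: a real engine would extract spectral and prosodic
    # features from the window and run a trained model over them.
    return window[0] / 255.0

audio_windows = [bytes([220, 10, 10]), bytes([40, 10, 10])]
for score, decision in stream_decisions(audio_windows, dummy_scorer):
    print(f"score={score:.2f} -> {decision}")
```

Because the decision is made per window rather than per recording, a clip that starts with a genuine voice and switches to a synthetic one, as in the Blumenthal example, can be flagged at the point where the switch happens.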

