
Written by: Laura Fitzgerald

Head of Brand and Digital Experience

AI-manipulated media can come in multiple forms. Pindrop’s newly released Deepfake and Voice Clone Consumer Sentiment Report (October 2023) categorizes deepfake technology in various ways, from static (e.g., text, printed, or written media) to dynamic media, which can include audio and even video elements.

Regardless of the technology and how it’s delivered, deepfakes are AI-manipulated digital media in text, image, audio, and video formats that replicate something real or alter critical characteristics that risk manipulating how that media is interpreted. As technology rapidly advances, deepfakes pose a risk to consumers and businesses across industries.

The report dives into consumer knowledge of the various subcategories within deepfake technology, highlighting where consumers are most concerned regarding recent trends and advances.

Results showed that 60% of respondents were highly concerned about deepfakes and voice clones (a subcategory of deepfakes), and over 90% expressed some level of concern.

Some key insights from the report include:

  • Social media drives more exposure — The top four channels for deepfake exposure are social media platforms: YouTube, TikTok, Instagram, and Facebook, with 49.0%, 42.8%, 37.9%, and 36.2% reported encounters, respectively.
  • There is slightly more awareness around voice clones than deepfakes — The term voice clone registered higher consumer awareness than deepfake, at 63.6% versus 54.6%. This may be surprising considering how much higher deepfake registers in Google Trends data. One explanation may be that the meaning of voice cloning is evident because most people are already familiar with the concept of cloning, while “deepfake” is only now gaining usage and broader popularity.
  • Awareness is driven by income — Three-quarters of U.S. adults with over $125,000 in annual income know about voice clones, and over two-thirds know about deepfakes. For consumers with incomes below $50,000, however, those figures are just 56.5% and 43.6%, respectively.
  • There is slightly more concern around deepfakes — U.S. adults report they are “extremely” or “very” concerned about the negative impacts of deepfakes at a higher rate than voice clones. The total for deepfakes is 60.4% and 57.9% for voice clones. The critical difference is in the “extremely” category, where deepfake outpaced voice clones by three percentage points.

Let’s dive into some areas that we found interesting in the report.

1. Awareness and Concern Vary Between Men and Women

More men than women reported awareness of deepfake technology – over 66% of men compared to 42% of women. Men were also 50% more likely than women to have created a deepfake. This could have to do with how they consume information.

While men were more likely to have encountered a deepfake on YouTube (52.4% vs. 46.7%), women had the edge over men on TikTok (43.1% vs. 38.4%). Men also reported significantly more concern than women about deepfakes and voice clones causing adverse impacts, and the gap was wider for voice clones: 60.8% for men versus 54.5% for women.

2. Pop Culture Plays a Role in AI Technology Sentiment

The polarization of deepfake sentiment varies across media types. Some entertainment consumers hold positive views of evolving AI technology. For example, the report showed that viewers of The Andy Warhol Diaries documentary held a 71% positive sentiment toward the use of deepfakes in the film.

Viewers of the most recent Indiana Jones movies also reported a 56.7% positive sentiment. This analysis suggests that viewers of those media saw tangible value in the technology.

Beyond the creativity of deepfakes (39.4%), consumers appreciate that they can be funny or entertaining (40%). But that appreciation comes with a paradox: over 90% of consumers are concerned about the threat of deepfakes, even though 40% expressed positive sentiment toward generative AI technology.

3. Concerns Vary By Industry and Income

67.5% of U.S. adults expressed concern about the risks posed by deepfakes and voice clones related to banking. Politics and government (54.9%) followed, then media and healthcare (50.1%). One reason for the variation in concern could be income.

Consumers with annual incomes below $50,000 were the least likely to express concern in every category. The exception is the insurance sector, where concern, regardless of income, was nearly equal to that of the $125,000 income group.

4. Views Are Split on Media and Social Media Readiness

Consumers lack confidence that news media organizations are taking proactive measures to combat deepfakes. While 40% of those surveyed feel optimistic that banks, insurance, and healthcare providers have taken steps to combat deepfake fraud, they have lower confidence in the news and social media sectors.

These results are significant because social media platforms, especially YouTube and TikTok, drive the highest number of deepfake encounters.

Final Thoughts: Deepfake and Voice Clone Consumer Sentiment Report 2023

The concern around deepfakes and voice-cloned media continues to grow as technology evolves. With awareness strongly driven by income and concern varying by industry, companies must build trust with their customer base by developing action plans that use holistic security tactics, such as multifactor authentication and voice biometric technology, to protect against AI fraud.

Nearly two-thirds of U.S. adults expressed concern about the inappropriate and often dangerous use of deepfakes and voice clones in the workplace, which can lead to fraud attacks, halted business operations, brand damage, and more.

Overall, 90% of respondents are concerned about generative AI technology, and that number will continue to increase as awareness of deepfakes and the capabilities of the technology both grow.

If you want to understand how to better protect your organization against deepfake and voice clone technology, download the report to learn more.
