
Written by: Sam Reardon

Marketing Content Writer

Misinformation is not a new problem, but the rise of AI-generated content in written, video, audio, and image formats has cast it in a new light. A few decades ago, spreading misinformation through journalism meant someone had to craft a well-thought-out lie, publish it in a newspaper, and disseminate it to the target audience. Now, all it takes is capable AI software and a carefully planned prompt to produce a video, image, or audio clip depicting someone in a situation they've never been in.

Identifying the difference between reality and a deepfake is becoming increasingly difficult, so many are wondering—how can we trust journalism and the media in general?

The rise of deepfakes in journalism and media

Deepfake technology made its first public appearance in November 2017, when a Reddit user shared an algorithm that could create realistic fake videos. Not long after, in early 2018, Jordan Peele released a deepfake video of Barack Obama as a public service announcement. It was an early signal that this new AI technology could pose a serious threat by blurring the line between fact and fiction at a never-before-seen level.

While some of the first deepfakes were created purely for entertainment and likely caused no real harm to those involved, it didn't take long for things to take a turn for the worse. Some started using deepfake videos in elaborate scams, convincing people they were receiving messages from family members in distress.

Then videos of politicians began emerging in attempts to alter public perception. As a 2024 systematic review briefly summarizes, “Deepfakes can cause political and religious strains between countries, deceive people, and impact elections.” A good example is the recent deepfake of Kamala Harris, made to look like a presidential campaign video, which raised serious questions about future election integrity.

Notable deepfake incidents in journalism

Since its emergence, deepfake technology has given rise to a series of worrying incidents in the media. Recently, South Korea experienced a full-blown deepfake crisis in which hundreds of schools and universities were targeted.

The perpetrators created and distributed fake videos targeting underage victims. The incident drew attention to the broader implications of deepfake technology for privacy and safety, particularly for vulnerable populations.

The U.S. hasn't been spared either, with fake images of Donald Trump being arrested circulating online just a few months ago.

Last year, CNN news anchor Anderson Cooper was the victim of a deepfake video in which he appeared to talk about Donald Trump in a less-than-flattering manner. Another AI-generated video portrayed Gayle King, a CBS Mornings co-host, promoting a product she'd never used.

Challenges deepfakes pose to journalism

With deepfake technology improving every day, fact-checking and verifying content authenticity are becoming harder for journalists worldwide. We're at a point where nothing is off-limits and the risk to security keeps increasing.

Abbas et al. state in their review that “satellite imagery can even be manipulated to incorporate things that do not exist: for instance, a fake bridge over a river, which can confuse military experts. In this way, deepfakes might mislead troops to target or defend a non-existent bridge.” Journalists must be more aware than ever of the challenges deepfakes pose.

Threat to media credibility

Journalism relies on trust, and the introduction of highly realistic but fake content makes it difficult for both journalists and audiences to confidently determine the truth. 

If the public sees even one manipulated video presented as authentic by a media outlet, it will harbor serious doubts about that outlet's integrity, even when its reporting is accurate.

Spread of misinformation & disinformation

Deepfakes make it increasingly easy to spread false information. With written content, journalists and readers tend to take claims with a grain of salt and look for confirmation. But when looking at a video or an image, people tend to believe what they see, and they're far less likely to suspect they're looking at AI-generated content.

Erosion of public trust

“Fake news” is a common accusation nowadays, and deepfakes only compound the problem, fueling skepticism toward the press. When audiences are unsure whether they can trust what they see or hear, journalism's foundation of delivering accurate and reliable information is severely compromised.

Impact of deepfakes on journalistic practices

Deepfakes are without a doubt a challenge for journalists. They affect not only how the public perceives the media and how much it trusts news outlets, but also journalistic practices as a whole.

Verification challenges

The first step in combating synthetic media and misinformation is to fact-check everything, again and again. Traditional techniques such as checking sources, cross-referencing reports, and analyzing video metadata can prove insufficient. Journalists will need more advanced techniques and fact-checking tools to ensure they publish genuine information.
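One piece of the metadata step can be automated. The sketch below, a minimal starting point rather than a detector, uses Python's standard library to call ffprobe (assuming the FFmpeg tools are installed) and surfaces fields that often merit a closer look, such as missing creation timestamps or encoder tags that hint at re-encoding. Clean metadata proves nothing on its own: convincing deepfakes can carry perfectly unremarkable metadata.

```python
import json
import subprocess

def probe_metadata(path: str) -> dict:
    """Return container and stream metadata for a media file via ffprobe.

    Assumes the FFmpeg suite (which provides ffprobe) is installed.
    """
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

def flag_fields_for_review(meta: dict) -> list[str]:
    """Surface metadata oddities worth a manual look; absence of flags
    is not evidence of authenticity."""
    flags = []
    tags = meta.get("format", {}).get("tags", {})
    if "creation_time" not in tags:
        flags.append("no creation_time tag (possibly stripped or re-encoded)")
    if "encoder" in tags:
        flags.append(f"encoder tag present: {tags['encoder']} (check for re-encoding)")
    return flags

if __name__ == "__main__":
    meta = probe_metadata("clip.mp4")  # hypothetical input file
    for flag in flag_fields_for_review(meta):
        print(flag)
```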

Legal and ethical considerations

The legal and ethical ramifications of deepfakes in journalism are profound. For one, media outlets have an ethical obligation to avoid disseminating manipulated material. Additionally, publishing fake content could expose journalists to liability for defamation or other legal claims.

News organizations must develop rigorous standards for detecting and reporting deepfakes, balancing the speed demanded by the 24-hour news cycle against the need for accuracy.

Need for new skills and tools

Traditional journalistic skills might not be enough in the landscape created by AI and deepfakes. Journalists need advanced fact-checking tools to detect fake content.

Ethical considerations in the era of synthetic media

In 2018, a video went viral in India showing a man on a motorbike kidnapping a child in the street. The video created panic and anger and led to eight weeks of violence, during which nine innocent people lost their lives. In the end, the video was proven to be fake.

AI has ushered in what some call information warfare, with many actors manipulating content to their advantage. Media outlets and journalists have a responsibility to prevent the spread of deepfakes, both to protect their own reputation and to safeguard the public.

Maintaining integrity in journalism

Journalists must adhere to strict ethical standards to ensure that their reporting remains accurate and trustworthy. This includes having thorough verification processes, being transparent about potential uncertainties, and correcting errors swiftly when they occur. 

Media outlet responsibilities

If the incidents so far have shown us anything, it's that media outlets bear a significant responsibility in combating the rise of deepfakes.

There is an increasing need for internal protocols and tools that can identify and prevent the spread of fake content. Media organizations also need to educate their employees and audiences about deepfakes and how to recognize them.

Strategies to combat deepfakes in journalism

We all need to help stop the spread of misinformation and fake content, but how? Here are a few strategies.

Technological solutions for deepfake detection

To combat deepfakes effectively, news organizations need to prioritize technological solutions for detecting forgeries. AI-powered tools, such as deepfake detectors, analyze subtle inconsistencies in video and audio that are imperceptible to the human eye or ear.

These tools use algorithms to scan for signs of manipulation, such as unnatural facial movements or discrepancies in lighting, which can indicate that a video is not real.
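As a rough illustration of frame-level screening, the sketch below samples frames from a video with OpenCV and passes each one to a placeholder scoring function. The `score_frame` stub is a hypothetical stand-in for a trained detection model, which this sketch does not include; real detectors weigh facial landmarks, blink patterns, lighting consistency, and frame-to-frame artifacts.

```python
import cv2  # OpenCV; assumes the opencv-python package is installed

def score_frame(frame) -> float:
    """Hypothetical stand-in for a trained deepfake-detection model.

    Returns a manipulation score in [0, 1]. This stub always returns 0.0
    so the pipeline runs end to end without a bundled model.
    """
    return 0.0

def screen_video(path: str, sample_every: int = 30, threshold: float = 0.5) -> bool:
    """Score one frame out of every `sample_every`; flag the clip if any
    sampled frame exceeds `threshold`."""
    capture = cv2.VideoCapture(path)
    suspicious = False
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break  # end of stream
        if index % sample_every == 0 and score_frame(frame) > threshold:
            suspicious = True
            break
        index += 1
    capture.release()
    return suspicious

if __name__ == "__main__":
    print("flagged for review:", screen_video("clip.mp4"))  # hypothetical file
```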

AI-powered detection tools

AI technologies help create deepfakes, but they can also help fight them. They can analyze massive amounts of data in real time, identifying patterns that suggest manipulation.

For instance, Pindrop® Pulse™ Inspect can analyze audio files and detect synthetic voices.
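To make the audio side concrete, here is a minimal, hypothetical sketch of turning a recording into features a voice classifier could score, using the open-source librosa library. It illustrates the general shape of such a pipeline only; it does not represent Pindrop's product or API, and the `is_synthetic` stub stands in for a trained model.

```python
import librosa  # assumes the librosa package is installed
import numpy as np

def extract_features(path: str) -> np.ndarray:
    """Load an audio file and compute MFCCs, a common spectral feature
    used as input to voice classifiers."""
    signal, sample_rate = librosa.load(path, sr=16000, mono=True)
    mfccs = librosa.feature.mfcc(y=signal, sr=sample_rate, n_mfcc=20)
    # Summarize over time so every clip yields a fixed-size vector.
    return np.concatenate([mfccs.mean(axis=1), mfccs.std(axis=1)])

def is_synthetic(features: np.ndarray) -> bool:
    """Hypothetical stand-in for a trained detection model. A real system
    would feed these features (or the raw audio) to such a model."""
    return False  # always "genuine" in this sketch

if __name__ == "__main__":
    feats = extract_features("interview.wav")  # hypothetical input file
    print("synthetic voice suspected:", is_synthetic(feats))
```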

Content authentication

Another strategy for combating deepfakes is content authentication, which involves verifying the source and integrity of digital media at the moment of creation. 

An example is blockchain technology, which can be used to record a digital fingerprint for videos and images so that any later tampering is detectable. Implementing such methods can help journalists prove their content is authentic, strengthening the public's trust.
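A minimal sketch of the fingerprinting idea, using only Python's standard library: hash a media file at publication time, record the digest somewhere tamper-evident (a blockchain ledger in the scheme described above, though any append-only store illustrates the point), and re-hash later to verify integrity. File names here are hypothetical.

```python
import hashlib

def fingerprint(path: str) -> str:
    """Compute a SHA-256 digest of a media file, reading in chunks so
    large videos never need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as media:
        for chunk in iter(lambda: media.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path: str, published_digest: str) -> bool:
    """True if the file still matches the digest recorded at creation."""
    return fingerprint(path) == published_digest

if __name__ == "__main__":
    # Record the digest when publishing...
    original = fingerprint("report.mp4")
    # ...then anyone can later confirm a copy is untouched.
    print("authentic:", verify("report.mp4", original))
```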

Media literacy & public education

No matter how hard journalists try, some deepfakes will still make their way to the public. That's where media literacy and public education come in. By helping people understand the risks of deepfakes and how to recognize them, we can diminish their impact and reduce the harm they cause.

Collaboration between tech companies & news organizations

Much of the responsibility for stopping the spread of deepfakes falls on news organizations, but on their own they might lack the resources to fight the phenomenon. Tech firms lead AI development and can support journalists against deepfake creators.

A collaboration between these two sectors can help create more advanced detection tools and shared protocols to verify the authenticity of media content.

The future of journalism in the age of deepfakes

Since synthetic media, algorithmic content creation, and fake news have become so prevalent, many wonder what the future of journalism will look like. Can journalism continue to exist as we know it? Will it have to change or will it disappear altogether? 

News outlets will need to invest in continuous training and education for their staff, equipping them with the skills necessary to identify and combat deepfakes. Emerging technologies, such as AI-driven verification systems and blockchain-based content authentication, will play a central role in this effort.

In the long term, the future of journalism will likely depend on its ability to preserve trust while embracing new tools and strategies to counter disinformation. Despite the challenges, journalists can stay ahead of the curve and help ensure the public receives reliable information.

Combat deepfakes in journalism today

The threat posed by deepfakes is undeniable, but journalists, media organizations, and technology companies have the tools and expertise to combat it. Simple yet high-quality solutions are already within reach.

One such example is Pindrop® Pulse™ Inspect, a tool that analyzes your audio data and determines whether a voice is genuine or synthetic. It offers real-time detection of synthetic voices, so you can rely on it even when you're racing against the clock and be one step closer to delivering real information.
