
Targets, Objectives, and Emerging Tactics of Political Deepfakes

Published: 24th September 2024
By: Insikt Group®


The rise of deepfakes poses significant threats to elections, public figures, and the media. Recent Insikt Group research highlights 82 deepfakes targeting public figures in 38 countries between July 2023 and July 2024. Deepfakes aimed at financial gain, election manipulation, character assassination, and non-consensual pornography are on the rise. To counter these risks, organizations must act swiftly, increase awareness, and implement advanced AI detection tools.

2024 Deepfakes and Political Disinformation: Emerging Threats & Mitigation Strategies

The proliferation of AI-generated deepfakes is reshaping the political and disinformation landscape. Between July 2023 and July 2024, deepfakes impersonating public figures surfaced in 38 countries, raising concerns about election interference, character defamation, and more. Here’s a detailed breakdown of these emerging threats and strategies to mitigate them.

Insikt Group identified 82 deepfakes across 38 countries; 30 of those countries held elections during the dataset's timeframe or had elections planned for 2024. Political figures, heads of state, candidates, and journalists were targeted, amplifying the potential to disrupt democratic processes.

Primary Objectives

Scams (26.8%): Deepfakes are frequently used to promote financial scams, leveraging heightened attention during elections. Prominent figures like Canadian Prime Minister Justin Trudeau and Mexican President-Elect Claudia Sheinbaum were impersonated in fraudulent schemes.

False Statements (25.6%): Deepfakes often fabricate public figures’ statements to mislead voters. For instance, fake audio clips emerged of UK Prime Minister Keir Starmer criticizing his own party and of Taiwan’s Ko Wen-Je making false accusations.

Electioneering (15.8%): Political parties increasingly use deepfakes to influence voter behavior. Turkey’s President Erdoğan used a deepfake to link an opposition leader to terrorist groups, while Argentina saw deepfakes in the Milei vs. Massa election battle.

Character Assassination (10.9%): Figures like Philippine President Ferdinand Marcos Jr. have been depicted engaging in unethical behavior, eroding public trust.

Non-consensual Pornography (10.9%): Women in politics are disproportionately targeted with deepfake pornography, creating reputational damage and deterring political participation.
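For readers who want rough counts rather than shares, the short sketch below converts the reported percentages back into approximate numbers of deepfakes out of the 82 identified. The percentages and the dataset total come from the report; the rounded counts are derived here purely for illustration.

```python
# Convert the reported objective shares into approximate counts out of 82 deepfakes.
# Percentages and the dataset total are from the report; the rounded counts are
# derived here for illustration only and may differ slightly from the underlying data.
TOTAL_DEEPFAKES = 82
objective_shares = {
    "Scams": 0.268,
    "False statements": 0.256,
    "Electioneering": 0.158,
    "Character assassination": 0.109,
    "Non-consensual pornography": 0.109,
}

for objective, share in objective_shares.items():
    print(f"{objective}: ~{round(share * TOTAL_DEEPFAKES)} of {TOTAL_DEEPFAKES}")

# The listed objectives cover roughly 90% of the dataset; the remainder falls into
# other or mixed objectives not broken out above.
```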

Emerging Deepfake Tactics

Several new tactics have emerged, demonstrating how sophisticated deepfake operations have become:

  • Fake Whistleblowers: Influence actors use AI to create deepfakes of third parties posing as whistleblowers, fabricating scandals to manipulate public opinion.
  • Audio Deepfakes: A growing trend is the use of fabricated audio clips to put false statements in public figures’ mouths, such as a clip in which US President Joe Biden appeared to urge voters to skip primaries.
  • Spoofing Media Assets: Influence actors increasingly use legitimate news branding, such as logos or overlays from France24 and BBC, to give their deepfakes credibility.
  • Foreign Leader Impersonation: Videos featuring foreign leaders such as China’s Xi Jinping and the US’s Donald Trump have been repurposed to influence domestic elections in Taiwan and South Africa, respectively.
  • Family Member Impersonation: Deepfakes are even targeting family members of political figures, adding another layer of disinformation and manipulation.

The Impact of Deepfakes on Elections

Deepfakes have become a tool in political warfare. In Slovakia, for example, deepfake audio emerged just before elections, spreading disinformation about electoral fraud, while in Turkey a candidate withdrew from the presidential race after the release of an alleged deepfake sex tape. The disinformation potential of deepfakes, especially in volatile political climates, is vast. The risk of discrediting political candidates and spreading false narratives makes the need for advanced countermeasures increasingly urgent.

Countering the Deepfake Threat

  • Speed and Accuracy: When deepfakes go viral, acting quickly is essential; public figures should promptly release authentic content to debunk false claims.
  • Familiarity Campaigns: Encourage public figures to increase their visibility, allowing people to become familiar with their true likenesses.
  • Copyright Leverage: Deepfakes that use copyrighted materials can be taken down via DMCA requests, providing a possible legal avenue to counter AI-generated disinformation.
  • Advanced AI Detection: Governments and media outlets need to invest in AI-powered detection tools to identify and take down deepfakes before they cause significant harm (a minimal detection sketch follows this list).
  • Collaboration with Fact-Checkers: Collaboration between social media platforms and fact-checking organizations will be essential in curbing the spread of deepfakes and false narratives.
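To make the “Advanced AI Detection” point above concrete, here is a minimal sketch of scoring a single extracted video frame with an off-the-shelf image-classification model through the Hugging Face transformers pipeline. The checkpoint name is a placeholder rather than a specific product or the report’s own tooling, and a real deployment would also need video-level aggregation, audio analysis, and human review.

```python
# Minimal sketch: score one extracted video frame with an image-classification model.
# DETECTOR_MODEL is a placeholder; substitute a vetted deepfake-detection checkpoint.
from transformers import pipeline
from PIL import Image

DETECTOR_MODEL = "your-org/deepfake-frame-detector"  # hypothetical checkpoint name

def score_frame(image_path: str) -> dict:
    """Return label -> confidence scores (for example, 'fake' vs. 'real') for one frame."""
    detector = pipeline("image-classification", model=DETECTOR_MODEL)
    frame = Image.open(image_path).convert("RGB")
    return {pred["label"]: pred["score"] for pred in detector(frame)}

if __name__ == "__main__":
    scores = score_frame("suspect_frame.jpg")
    print(scores)  # for example, {'fake': 0.97, 'real': 0.03}; thresholds still need human review
```

Frame-level scores alone are noisy; in practice they would feed the faster response and fact-checking workflow described above rather than replace it.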

Deepfakes will likely play a significant role in the 2024 US elections and in global political processes, potentially influencing outcomes by damaging reputations and eroding trust in elections. Regulation struggles to keep pace with the evolving technology, particularly where foreign interference and election manipulation are concerned. Research suggests that, beyond a baseline level of quality, deepfakes do not need to be highly sophisticated to cause harm. Focusing on response and mitigation is therefore more effective, including exposing audiences to the real likeness of impersonated individuals and fact-checking false narratives swiftly.

To read the entire analysis, click here to download the report as a PDF.
