PsyPost

Deepfake videos degrade political reputations even when viewers realize they are fake

by Karina Petrova
May 5, 2026
[Adobe Stock]


Artificial intelligence can be used to generate deceptive videos that damage a politician’s reputation, even when viewers suspect the footage is fake. A new study published in Communication Research found that these manipulated clips decrease support for targeted candidates, and that standard fact-checking efforts fail to fully undo the reputational harm.

Disinformation created using artificial intelligence is often regarded as a major threat to global elections. Technology now allows malicious actors to seamlessly replace a person’s face or clone their voice. These creations are commonly called deepfakes. Political operatives can use these tools to make opposing candidates appear to say outrageous or offensive things.

Michael Hameleers, a communication researcher at the University of Amsterdam, led a team to investigate how these videos influence the public. Hameleers and his colleagues Toni G. L. A. van der Meer, Marina Tulin, and Tom Dobber wanted to track voter reactions over time. They aimed to discover if these manipulated videos actually influence minds during an election cycle.

Visual information is known to heavily influence human perception. Because people are accustomed to believing their own eyes, video evidence often bypasses normal skepticism. The research team weighed this visual power against the brain’s tendency to detect inconsistencies. They wanted to know if a wildly uncharacteristic statement would override the visual proof of a realistic video.

Processing fluency is a psychological concept describing how easy information is to understand. When media is easy to consume, people tend to accept it more readily without critical thought. The researchers suspected that realistic video formats would prompt this mental shortcut, making the lies easier to digest. They wanted to measure if a smooth presentation could hide a blatant falsehood.

The team conducted their tests across two contrasting political landscapes. The United States features a highly polarized two-party system that is historically vulnerable to right-wing disinformation. The Netherlands operates under a multiparty system with higher general trust in the press, offering a more resilient media environment.

The researchers recruited over 3,000 adults across both countries. They designed a three-part experiment that took place over a full week in 2021. Participants answered questions at the start, were contacted again two days later, and completed a final survey three days after that.

During the surveys, participants were randomly assigned to watch either a genuine political address or a manipulated video. In the United States, the altered video featured Representative Nancy Pelosi. The artificial audio made it sound as though she sympathized with the rioters who breached the United States Capitol, suggesting Americans need to fight to win their country back.


In the Netherlands, the team selected a moderate Christian Democratic politician named Sybrand Buma. The manipulated footage showed him delivering an extremist, anti-immigrant monologue about protecting Dutch traditions from foreign influences. The messages were designed to completely contradict the established public personas of the two targets.

The project also tested potential defensive measures against digital deception. Some participants read a media literacy warning before watching the video. This introductory warning provided specific tips on how to question news sources and spot fabricated news items online.

Another group was shown a fact-check immediately after watching the video, which explicitly corrected the false claims. The correction messages offered point-by-point refutations of the statements made in the videos. These interventions mimicked the exact format used by professional journalism organizations.

Evaluating the results, the researchers found the audience largely saw through the deception. In both countries, participants rated the altered videos as far less believable than the genuine ones. The bizarre nature of the statements likely tipped viewers off that something was amiss with the footage.

Despite the structural differences between the two nations, the psychological trends remained remarkably consistent. Voters in the polarized American system and the consensus-driven Dutch system reacted to the synthetic videos in nearly identical ways. The broad similarities imply that vulnerability to artificial media transcends cultural borders.

Even though people correctly suspected the videos were fake, their opinions of the politicians still dropped. The deepfakes successfully damaged the reputations of both Pelosi and Buma. This finding highlights a mental disconnect between evaluating a video’s authenticity and absorbing its emotional weight.

The reputational damage was most severe among participants who initially supported the targeted politicians. Seeing a favored leader apparently voice extreme or contradictory views caused an immediate negative reaction. People who already disliked the politicians did not change their ratings much, largely because those ratings were already near the bottom of the scale.

While the deepfakes changed how people felt about specific politicians, they did not shift overarching political beliefs. Participants in the United States did not suddenly support the Capitol riot after watching the Pelosi video. The deception altered judgments about the individual messenger rather than the message itself.

Showing the deceptive footage multiple times was expected to trigger an illusion of truth, where repeated falsehoods eventually feel familiar and accurate. In this experiment, seeing the video twice did deepen the reputation damage for the American participants. Yet, this repetition did not make the wild claims seem any more believable.

The consequences of watching the fabricated media were mostly temporary across both populations. By the end of the week, the negative feelings directed at the politicians had largely faded away. This outcome suggests a natural recovery period occurs when people step away from the false information in an isolated experiment.

The defensive interventions produced mixed outcomes for the tested audiences. Fact-checking the videos made participants even less likely to believe the footage was real. Yet, those exact same fact-checks completely failed to reverse the emotional damage done to the politicians’ reputations. Media literacy warnings produced almost no measurable impact at all.

The study authors noted a few limitations regarding their video selections. The chosen clips featured extreme shifts in political rhetoric, which made the deception easier to spot. Future projects might test subtle alterations to see if highly plausible fakes bypass human suspicion entirely.

The manipulated videos also contained minor visual glitches. Voice actors were used to simulate the politicians, meaning an observant viewer could detect slightly unnatural audio. As generation tools continue to evolve, these sensory flaws will likely disappear.

The research team recommends running future tests during live political campaigns. Tracking real-world reactions to actual digital propaganda would reveal how voters process manipulated media alongside competing news coverage. Such experiments could clarify exactly how artificial intelligence shapes modern democracy.

The study, “Radical Right-Wing Political Deepfakes Can Successfully Delegitimize Targeted Political Actors: Evidence From Three-wave Experiments in the US and The Netherlands,” was authored by Michael Hameleers, Toni G. L. A. van der Meer, Marina Tulin, and Tom Dobber.

(c) PsyPost Media Inc
