Artificial intelligence can be used to generate deceptive videos that damage a politician’s reputation, even when viewers suspect the footage is fake. A new study published in Communication Research found that these manipulated clips decrease support for targeted candidates, and that standard fact-checking efforts failed to fully undo the reputational harm.
Disinformation created using artificial intelligence is often regarded as a major threat to global elections. Technology now allows malicious actors to seamlessly replace a person’s face or clone their voice. These creations are commonly called deepfakes. Political operatives can use these tools to make opposing candidates appear to say outrageous or offensive things.
Michael Hameleers, a communication researcher at the University of Amsterdam, led a team to investigate how these videos influence the public. Hameleers and his colleagues Toni G. L. A. van der Meer, Marina Tulin, and Tom Dobber wanted to track voter reactions over time. They aimed to discover if these manipulated videos actually influence minds during an election cycle.
Visual information is known to heavily influence human perception. Because people are accustomed to believing their own eyes, video evidence often bypasses normal skepticism. The research team weighed this visual power against the brain’s tendency to detect inconsistencies. They wanted to know if a wildly uncharacteristic statement would override the visual proof of a realistic video.
Processing fluency is a psychological concept describing how easily information is taken in and understood. When media is easy to consume, people tend to accept it more readily and with less critical thought. The researchers suspected that realistic video formats would prompt this mental shortcut, making the lies easier to digest. They wanted to measure whether a smooth presentation could hide a blatant falsehood.
The team conducted their tests across two contrasting political landscapes. The United States features a highly polarized two-party system that is historically vulnerable to right-wing disinformation. The Netherlands operates under a multiparty system with higher general trust in the press, offering a more resilient media environment.
The researchers recruited over 3,000 adults across both countries. They designed a three-part experiment that took place over a full week in 2021. Participants answered questions at the start, were contacted again two days later, and completed a final survey three days after that.
During the surveys, participants were randomly assigned to watch either a genuine political address or a manipulated video. In the United States, the altered video featured Representative Nancy Pelosi. The artificial audio made it sound as though she sympathized with the rioters who breached the United States Capitol, suggesting Americans need to fight to win their country back.
In the Netherlands, the team selected a moderate Christian Democratic politician named Sybrand Buma. The manipulated footage showed him delivering an extremist, anti-immigrant monologue about protecting Dutch traditions from foreign influences. The messages were designed to completely contradict the established public personas of the two targets.
The project also tested potential defensive measures against digital deception. Some participants read a media literacy warning before watching the video. This introductory warning provided specific tips on how to question news sources and spot fabricated news items online.
Another group was shown a fact-check immediately after watching the video, which explicitly corrected the false claims. The correction messages offered point-by-point refutations of the statements made in the videos. These interventions mimicked the exact format used by professional journalism organizations.
Evaluating the results, the researchers found the audience largely saw through the deception. In both countries, participants rated the altered videos as far less believable than the genuine footage. The bizarre nature of the statements likely tipped viewers off that something was amiss.
Despite the structural differences between the two nations, the psychological trends remained remarkably consistent. Voters in the polarized American system and the consensus-driven Dutch system reacted to the synthetic videos in nearly identical ways. The broad similarities imply that vulnerability to artificial media transcends cultural borders.
Even though people correctly suspected the videos were fake, their opinions of the politicians still dropped. The deepfakes successfully damaged the reputations of both Pelosi and Buma. This finding highlights a mental disconnect between evaluating a video’s authenticity and absorbing its emotional weight.
The reputational damage was most severe among participants who initially supported the targeted politicians. Seeing a favored leader apparently voice extreme or contradictory views caused an immediate negative reaction. People who already disliked the politicians did not change their ratings much, largely because their opinions were negative to begin with.
While the deepfakes changed how people felt about specific politicians, they did not shift overarching political beliefs. Participants in the United States did not suddenly support the Capitol riot after watching the Pelosi video. The deception altered judgments about the individual messenger rather than the message itself.
Showing the deceptive footage multiple times was expected to trigger the illusory truth effect, in which repeated falsehoods eventually feel familiar and accurate. In this experiment, seeing the video twice did deepen the reputational damage for the American participants. Yet this repetition did not make the wild claims seem any more believable.
The consequences of watching the fabricated media were mostly temporary across both populations. By the end of the week, the negative feelings directed at the politicians had largely faded away. This outcome suggests that, at least in an isolated experimental setting, a natural recovery occurs once people step away from the false information.
The defensive interventions produced mixed outcomes for the tested audiences. Fact-checking the videos made participants even less likely to believe the footage was real. Yet, those exact same fact-checks completely failed to reverse the emotional damage done to the politicians’ reputations. Media literacy warnings produced almost no measurable impact at all.
The study authors noted a few limitations regarding their video selections. The chosen clips featured extreme shifts in political rhetoric, which made the deception easier to spot. Future projects might test subtle alterations to see if highly plausible fakes bypass human suspicion entirely.
The manipulated videos also contained minor imperfections. Voice actors were used to simulate the politicians, meaning an observant viewer could detect slightly unnatural audio. As generation tools continue to evolve, these sensory flaws will likely disappear.
The research team recommends running future tests during live political campaigns. Tracking real-world reactions to actual digital propaganda would reveal how voters process manipulated media alongside competing news coverage. Such experiments could clarify exactly how artificial intelligence shapes modern democracy.
The study, “Radical Right-Wing Political Deepfakes Can Successfully Delegitimize Targeted Political Actors: Evidence From Three-wave Experiments in the US and The Netherlands,” was authored by Michael Hameleers, Toni G. L. A. van der Meer, Marina Tulin, and Tom Dobber.