A study conducted in Ireland found that political misinformation deepfakes created by an undergraduate student reduced viewers’ willingness to vote for the politicians targeted. However, these deepfakes were not consistently more effective than the same misinformation presented as simple text. The findings were published in Applied Cognitive Psychology.
Deepfakes are synthetic media created using artificial intelligence to replace one person’s likeness or voice with another’s, often in videos or audio recordings. They rely on deep learning techniques to produce highly realistic forgeries. Political deepfakes involve altering videos or speeches of public figures to make them appear to say or do things they never actually said or did.
Such content can be used to spread disinformation, manipulate public opinion, or erode trust in institutions. Political deepfakes are especially concerning during election campaigns, protests, or international crises. They can be hard to detect, particularly when shared widely on social media. While some deepfakes are clearly labeled as parody or satire, others are intended to mislead, incite conflict, defame opponents, or undermine democratic processes.
Study author Gillian Murphy and her colleagues set out to investigate how effective amateur political deepfakes—those created by an “average Joe”—are in shaping political opinions. To explore this, the team enlisted an undergraduate student (one of the study’s co-authors) and asked him to create the most convincing deepfakes he could, using only publicly available tools and information online. He was given no specialized training or equipment.
The researchers wanted to evaluate how these deepfakes influenced false memory formation (i.e., causing viewers to remember deepfaked events as real), political opinions, and voting intentions. They also examined whether viewers were suspicious of the content and if they could correctly identify the stories as fake.
The study involved 443 participants, recruited via Prolific and university mailing lists. The average age was 38, and 60% were women. All participants were native English speakers residing in Ireland.
The student was tasked with creating several deepfakes involving fabricated stories about Irish politicians. The scenarios were designed to be plausible but damaging to the politicians’ reputations. The targets included Simon Harris, the current Tánaiste (deputy prime minister), and Mary Lou McDonald, leader of the main opposition party, Sinn Féin.
Participants were first shown a true story about Prime Minister Micheál Martin visiting Gaza. This was followed by one of the fabricated deepfake stories and a second true story about the other politician. These real stories served as controls to measure how the false content affected perceptions of the targeted politicians. The deepfakes were presented in one of three formats: audio-only, video (accompanied by an image, headline, and text), or text-only.
After each news item, participants were asked if they remembered the event (to measure false memories), and to rate their political opinions (e.g., “I like [politician] personally”) and voting intentions (“I would vote for [politician] if I could”). Once all the stories had been presented, participants were asked whether they suspected any stories were false and whether they could identify which ones were fake.
One week later, participants who had been recruited via Prolific were invited to complete a follow-up study. This second study was nearly identical but included both fake stories from the original study, a repeated filler story, and one new filler story. This design allowed the researchers to assess whether the misinformation effects persisted over time.
Results from the first study showed that 6% of participants falsely remembered the fake event when it was presented in text-only format. This rose to 14% for audio or video deepfakes, and to 25% when the deepfake was created using a paid service, which produced a higher-quality result. The influence of the deepfakes on political attitudes was modest overall and absent for one of the two stories.
In the case of Simon Harris, the deepfake created with a paid service reduced participants’ desire to vote for him by 23%, while the audio-only version reduced it by 31%. In contrast, deepfakes targeting Mary Lou McDonald had no effect on voting intentions.
About 14% of participants correctly guessed that the study was investigating misinformation. Depending on the story, 76% to 78% of participants correctly identified the fake stories as fake. However, 33% also incorrectly identified the true filler story as fake, suggesting some confusion or heightened skepticism.
In the follow-up study, between 86% and 92% of participants were able to correctly identify the fake story from the first study, depending on whether they had previously seen it.
“Overall, the current study serves as a litmus test for the present-day accessibility and potency of deepfake technology. We encourage other researchers to remain critical and evidence-based in their claims about emerging technologies and resist dystopian narratives about what an emerging technology ‘might’ do in the near future, when they are making claims about what the technology may do today,” the study authors concluded.
The study sheds light on the effects deepfakes have on viewers’ political opinions. However, it is important to recognize its limitations. Participants were exposed to only a single deepfake (in the first study) or two deepfakes (in the second study), targeting different politicians. This is unlike real-world situations, where misinformation, including deepfakes, is often repeated across multiple platforms, appears to come from diverse sources, and is reinforced over time.
The paper, “An Average Joe, a Laptop, and a Dream: Assessing the Potency of Homemade Political Deepfakes,” was authored by Gillian Murphy, Didier Ching, Eoghan Meehan, John Twomey, Aaron Bolger, and Conor Linehan.