Homemade political deepfakes can fool voters, but may not beat plain text misinformation

by Vladimir Hedrih
April 30, 2025
in Artificial Intelligence, Political Psychology

A study conducted in Ireland found that political misinformation deepfakes created by an undergraduate student reduced viewers’ willingness to vote for the politicians targeted. However, these deepfakes were not consistently more effective than the same misinformation presented as simple text. The findings were published in Applied Cognitive Psychology.

Deepfakes are synthetic media created using artificial intelligence to replace one person’s likeness or voice with another’s, often in videos or audio recordings. They rely on deep learning techniques to produce highly realistic forgeries. Political deepfakes involve altering videos or speeches of public figures to make them appear to say or do things they never actually said or did.

Such content can be used to spread disinformation, manipulate public opinion, or erode trust in institutions. Political deepfakes are especially concerning during election campaigns, protests, or international crises. They can be hard to detect, particularly when shared widely on social media. While some deepfakes are clearly labeled as parody or satire, others are intended to mislead, incite conflict, defame opponents, or undermine democratic processes.

Study author Gillian Murphy and her colleagues set out to investigate how effective amateur political deepfakes—those created by an “average Joe”—are in shaping political opinions. To explore this, the team enlisted an undergraduate student (one of the study’s co-authors) and asked him to create the most convincing deepfakes he could, using only publicly available tools and information online. He was given no specialized training or equipment.

The researchers wanted to evaluate how these deepfakes influenced false memory formation (i.e., causing viewers to remember deepfaked events as real), political opinions, and voting intentions. They also examined whether viewers were suspicious of the content and if they could correctly identify the stories as fake.

The study involved 443 participants, recruited via Prolific and university mailing lists. The average age was 38, and 60% were women. All participants were native English speakers residing in Ireland.

The student was tasked with creating several deepfakes involving fabricated stories about Irish politicians. The scenarios were designed to be plausible but damaging to the politicians’ reputations. The targets included Simon Harris, the current Tánaiste (deputy prime minister), and Mary Lou McDonald, an opposition leader.

Participants were first shown a true story about Prime Minister Micheál Martin visiting Gaza. This was followed by one of the fabricated deepfake stories and a second true story about the other politician. These real stories served as controls to measure how the false content affected perceptions of the targeted politicians. The deepfakes were presented in three formats: audio-only, video (with image, headline, and text), and text-only.

After each news item, participants were asked if they remembered the event (to measure false memories), and to rate their political opinions (e.g., “I like [politician] personally”) and voting intentions (“I would vote for [politician] if I could”). Once all the stories had been presented, participants were asked whether they suspected any stories were false and whether they could identify which ones were fake.

One week later, participants who had been recruited via Prolific were invited to complete a follow-up study. This second study was nearly identical to the first, except that it included both fake stories from the original study, one repeated filler story, and one new filler. This design let the researchers assess whether the misinformation effects persisted over time.

Results from the first study showed that 6% of participants falsely remembered the fake event when it was presented in text-only format. This rose to 14% for audio or video deepfakes, and 25% when the deepfake was created using a paid service (resulting in higher quality). The influence of the deepfakes on political attitudes was modest overall and absent for one of the two stories.

In the case of Simon Harris, the deepfake created with a paid service reduced participants’ desire to vote for him by 23%, while the audio-only version reduced it by 31%. In contrast, deepfakes targeting Mary Lou McDonald had no effect on voting intentions.

About 14% of participants correctly guessed that the study was investigating misinformation. Depending on the story, 76% to 78% of participants correctly identified the fake stories as fake. However, 33% also incorrectly identified the true filler story as fake, suggesting some confusion or heightened skepticism.

In the follow-up study, between 86% and 92% of participants were able to correctly identify the fake story from the first study, depending on whether they had previously seen it.

“Overall, the current study serves as a litmus test for the present-day accessibility and potency of deepfake technology. We encourage other researchers to remain critical and evidence-based in their claims about emerging technologies and resist dystopian narratives about what an emerging technology ‘might’ do in the near future, when they are making claims about what the technology may do today,” the study authors concluded.

The study sheds light on how deepfakes affect viewers’ political opinions. However, it is important to recognize its limitations. Participants were only exposed to a single deepfake (in the first study) or two deepfakes (in the second study), targeting different politicians. This is unlike real-world situations, where misinformation—including deepfakes—is often repeated across multiple platforms, appears to come from diverse sources, and is reinforced over time.

The paper, “An Average Joe, a Laptop, and a Dream: Assessing the Potency of Homemade Political Deepfakes,” was authored by Gillian Murphy, Didier Ching, Eoghan Meehan, John Twomey, Aaron Bolger, and Conor Linehan.
