AI can change political opinions by flooding voters with real and fabricated facts

by Karina Petrova
December 9, 2025
in Artificial Intelligence, Political Psychology

Artificial intelligence chatbots can alter people’s beliefs about political candidates and controversial policies, often outperforming traditional persuasion methods such as television advertisements. Two new large-scale studies, published in the journals Science and Nature, demonstrate that these systems achieve this influence by overwhelming users with factual, and sometimes fabricated, information rather than by tailoring arguments to an individual’s personality.

Policy experts and social scientists have debated whether large language models might serve as tools for mass indoctrination or radicalization. Previous research into political advertising often indicated that voter preferences are stubborn and difficult to shift during high-stakes campaigns. It was largely unknown whether the interactive nature of generative AI would differ from static media in its ability to penetrate these psychological defenses.

To investigate this, two separate international teams conducted massive experiments involving tens of thousands of participants across four countries. One investigation was led by Kobi Hackenburg, a researcher at the University of Oxford, who sought to identify the specific levers that make AI convincing. The other team was led by Hause Lin, a cognitive scientist at the Massachusetts Institute of Technology, who focused on testing the practical impact of AI during active election cycles.

Lin and his colleagues recruited thousands of voters in the United States, Canada, and Poland to engage in text-based debates with an automated system. In the American experiment, conducted prior to the 2024 election, participants discussed their views on Donald Trump and Kamala Harris. The researchers aimed to determine whether the interactive nature of AI could succeed where passive media often fails.

The results demonstrated that a short conversation, lasting roughly six minutes, could shift voter preferences by meaningful margins. In the U.S. trials, the chatbot moved candidate support by nearly four points on a 100-point scale. This effect is considerably larger than what political strategists typically observe from standard video advertisements or digital flyers.

These shifts in opinion occurred even among participants with strong partisan identities. The artificial agents were particularly effective at softening the views of opposition voters, such as persuading Trump supporters to view Harris more favorably. This challenges the assumption that political polarization renders voters immune to opposing arguments.

Lin’s team re-contacted participants weeks later to determine if the effects were temporary. They found that approximately one-third of the initial shift in opinion persisted over time. This indicates that the interactions left a durable impression on voter attitudes rather than a fleeting emotional reaction.

International trials mirrored these results. Experiments conducted during election seasons in Canada and Poland showed that the persuasive power of AI is robust across different cultures and languages. When the researchers stripped the chatbots of their ability to use facts and evidence, their persuasive power dropped by more than half.

The study led by Lin also tested persuasion on a specific policy issue: a ballot measure to legalize psychedelic drugs in Massachusetts. This topic was less polarized along party lines than the presidential race, and in this context the artificial intelligence achieved even stronger results.

When the chatbot advocated for the measure, support among skeptics jumped by nearly fifteen points. Conversely, when the system argued against legalization, support among proponents dropped considerably. This implies AI may be especially powerful in local elections or single-issue campaigns where voters have less entrenched identities.

While Lin’s group examined electoral outcomes, Hackenburg and his team focused on the technical factors that drive persuasion. They coordinated a massive investigation involving nearly 77,000 participants in the United Kingdom, each of whom debated one of 707 political issues ranging from healthcare to national security.

Hackenburg’s study tested 19 different language models to isolate the specific features that change minds. The data confirmed that engaging in a back-and-forth dialogue is substantially more effective than simply reading a static argument. The interactive element appears essential for breaking down user resistance.

A common hypothesis in the tech industry suggests that personalization is the ultimate tool for manipulation. However, Hackenburg found that tailoring arguments to a user’s age or political background provided only a negligible boost in effectiveness. Instead, the most potent strategy was “information density.”

The models became most convincing when they flooded the conversation with statistics, facts, and citations. This approach outperformed emotional appeals or moral reframing. The sheer volume of perceived evidence seemed to overwhelm the user’s ability to counter-argue.
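As a rough illustration, and not the studies’ actual methodology, the notion of information density can be approximated by counting evidence-like tokens per sentence. The sketch below uses a hypothetical information_density function built on a crude numbers-and-citations heuristic; the researchers’ own measures were far more careful.

    import re

    def information_density(text: str) -> float:
        """Rough count of evidence-like tokens (figures, percentages,
        citation phrases) per sentence. A toy proxy for the idea of
        flooding a conversation with facts and statistics."""
        sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
        if not sentences:
            return 0.0
        evidence = len(re.findall(r"\b\d[\d,.]*%?", text))  # numbers, years, percentages
        evidence += len(re.findall(r"\[\d+\]|\baccording to\b", text, re.I))  # citation markers
        return evidence / len(sentences)

    dense = ("Turnout rose 7% in 2020, and 62% of voters cited the economy. "
             "According to exit polls, support fell by 4 points.")
    vague = "Many people feel the economy matters and that things have changed."
    print(information_density(dense))  # high score: several figures per sentence
    print(information_density(vague))  # 0.0: no quantitative claims

A reply like the first example would score far higher than the second, and that evidence-heavy style is the pattern the researchers found most persuasive.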

The research also highlighted the importance of how a model is trained after its initial creation. Hackenburg used a technique called reward modeling to teach smaller systems to prioritize persuasive responses. A standard model running on modest hardware could match the persuasive capability of an advanced system like GPT-4 through this specialized fine-tuning.
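The article does not detail the exact training pipeline, but one standard way a reward model steers generation is best-of-N sampling: draw several candidate replies, keep the one the reward model scores highest, and optionally reuse those winners as fine-tuning data. The sketch below is a hypothetical illustration with toy stand-ins for both the generator and the reward model, not the paper’s code.

    import random
    from typing import Callable, List

    def best_of_n(prompt: str,
                  generate: Callable[[str], str],
                  reward: Callable[[str, str], float],
                  n: int = 8) -> str:
        """Sample n candidate replies and keep the one the reward model
        rates as most persuasive. Winning replies can also serve as
        preference data for fine-tuning the generator itself."""
        candidates: List[str] = [generate(prompt) for _ in range(n)]
        return max(candidates, key=lambda reply: reward(prompt, reply))

    # Stubs standing in for a real language model and a trained reward model.
    def toy_generate(prompt: str) -> str:
        return random.choice([
            "Reports from 2021 and 2023 show a 12% drop in costs.",
            "I just feel strongly that this policy is right.",
        ])

    def toy_reward(prompt: str, reply: str) -> float:
        # Hypothetical scorer that favors number-laden replies,
        # echoing the information-density finding above.
        return sum(ch.isdigit() for ch in reply)

    print(best_of_n("Should the policy pass?", toy_generate, toy_reward))

The relevant design point is that nothing in this loop checks truthfulness: once persuasiveness is the reward signal, a fabricated but convincing claim scores just as well as an accurate one, which is consistent with the accuracy drift both papers describe.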

Both studies uncovered a troubling relationship between persuasion and truth. The analysis in Hackenburg’s paper revealed that as models were optimized to be more persuasive, they became less factually accurate. When GPT-4 was prompted to win an argument at all costs, it began to hallucinate or fabricate information to support its case.

The study by Lin identified a similar pattern involving political bias. Researchers found that models advocating for right-wing candidates produced more false statements than those arguing for left-wing candidates. This asymmetry occurred even though the system received identical instructions to remain factual for both sides.

For instance, a model might falsely claim a candidate had a consistent record on an issue when they had actually flipped positions. It might also attribute economic data to the wrong administration to make a point. This happened regardless of which specific AI model the researchers tested.

This suggests that current language models may reflect biases or inaccuracies present in their training data. It also indicates a structural flaw where the goal of winning an argument competes with the goal of being truthful. The systems naturally drifted toward misleading claims when pressed to be convincing.

The researchers emphasize that accuracy did not seem to correlate with persuasiveness in their models. The bots that lied more were not necessarily more effective at changing minds. However, the prevalence of misinformation in generated text remains a technical hurdle.

The authors of both papers note specific limitations to their work. The experiments relied on survey participants who were compensated to engage with the AI. This setting differs from the chaotic environment of social media where users might simply ignore a chatbot.

Future research must determine if these persuasive effects hold true when users encounter automated accounts in the wild. It remains unclear whether hostile voters would willingly enter a six-minute debate with a stranger outside of a controlled study. Additionally, the challenge of ensuring AI adherence to strict truthfulness remains unsolved.

Regulatory bodies face the difficult task of managing this capability without stifling the underlying technology. The findings imply that restricting access to the largest models may be insufficient, as smaller open-source models can be fine-tuned for high persuasion. Ensuring transparency about when a voter is conversing with a machine will be a priority for maintaining the integrity of public discourse.

The study, “The levers of political persuasion with conversational artificial intelligence,” was authored by Kobi Hackenburg, Ben M. Tappin, Luke Hewitt, Ed Saunders, Sid Black, Hause Lin, Catherine Fist, Helen Margetts, David G. Rand, and Christopher Summerfield.

The study, “Persuading voters using human–artificial intelligence dialogues,” was authored by Hause Lin, Gabriela Czarnek, Benjamin Lewis, Joshua P. White, Adam J. Berinsky, Thomas Costello, Gordon Pennycook, and David G. Rand.
