PsyPost
PsyPost
No Result
View All Result
Home Exclusive Artificial Intelligence

AI can change political opinions by flooding voters with real and fabricated facts

by Karina Petrova
December 9, 2025
in Artificial Intelligence, Political Psychology
[Adobe Stock]

Artificial intelligence chatbots can alter people's beliefs about political candidates and controversial policies, often outperforming traditional methods like television advertisements. Two new large-scale studies published in the journals Science and Nature demonstrate that these systems achieve this influence by overwhelming users with factual—and sometimes fabricated—information rather than by tailoring arguments to an individual's personality.

Policy experts and social scientists have debated whether large language models might serve as tools for mass indoctrination or radicalization. Previous research into political advertising often indicated that voter preferences are stubborn and difficult to shift during high-stakes campaigns. It was largely unknown whether the interactive nature of generative AI would differ from static media in its ability to penetrate these psychological defenses.

To investigate this, two separate international teams orchestrated massive experiments involving tens of thousands of participants across four countries. One investigation was led by Kobi Hackenburg, a researcher at the University of Oxford, who sought to isolate the specific levers that make AI persuasive. The other team was led by Hause Lin, a cognitive scientist at the Massachusetts Institute of Technology, who focused on testing the practical impact of AI during active election cycles.

Lin and his colleagues recruited thousands of voters in the United States, Canada, and Poland to engage in text-based debates with an automated system. In the American experiment conducted prior to the 2024 election, participants discussed their views on Donald Trump and Kamala Harris. The researchers aimed to determine if the interactive nature of AI could succeed where passive media often fails.

The results demonstrated that a short conversation lasting roughly six minutes could shift voter preferences by significant margins. In the U.S. trials, the chatbot successfully moved support by nearly four points on a one-hundred-point scale. This effect size is considerably larger than what political strategists typically observe from standard video advertisements or digital flyers.

These shifts in opinion occurred even among participants with strong partisan identities. The artificial agents were particularly effective at softening the views of opposition voters, such as persuading Trump supporters to view Harris more favorably. This challenges the assumption that political polarization renders voters immune to opposing arguments.

Lin’s team re-contacted participants weeks later to determine if the effects were temporary. They found that approximately one-third of the initial shift in opinion persisted over time. This indicates that the interactions left a durable impression on voter attitudes rather than a fleeting emotional reaction.

International trials mirrored these results. Experiments conducted during election seasons in Canada and Poland showed that the persuasive power of AI is robust across different cultures and languages. When the researchers stripped the chatbots of their ability to use facts and evidence, their persuasive power dropped by more than half.

The study led by Lin also tested persuasion on a specific policy issue regarding a ballot measure to legalize psychedelic drugs in Massachusetts. This topic was less polarized along party lines than the presidential race. In this context, the artificial intelligence achieved even stronger results.

When the chatbot advocated for the measure, support among skeptics jumped by nearly fifteen points. Conversely, when the system argued against legalization, support among proponents dropped considerably. This implies AI may be especially powerful in local elections or single-issue campaigns where voters have less entrenched identities.

While Lin’s group examined electoral outcomes, Hackenburg and his team focused on the technical factors that drive persuasion. They coordinated a massive investigation involving nearly 77,000 participants in the United Kingdom. These individuals debated one of 707 different political issues ranging from healthcare to national security.

Hackenburg’s study tested 19 different language models to isolate the specific features that change minds. The data confirmed that engaging in a back-and-forth dialogue is substantially more effective than simply reading a static argument. The interactive element appears essential for breaking down user resistance.

A common hypothesis in the tech industry suggests that personalization is the ultimate tool for manipulation. However, Hackenburg found that tailoring arguments to a user’s age or political background provided only a negligible boost in effectiveness. Instead, the most potent strategy was “information density.”

The models became most convincing when they flooded the conversation with statistics, facts, and citations. This approach outperformed emotional appeals or moral reframing. The sheer volume of perceived evidence seemed to overwhelm the user’s ability to counter-argue.

The research also highlighted the importance of how a model is trained after its initial creation. Hackenburg used a technique called reward modeling to teach smaller systems to prioritize persuasive responses. A standard model running on modest hardware could match the persuasive capability of an advanced system like GPT-4 through this specialized fine-tuning.

Both studies uncovered a troubling relationship between persuasion and truth. The analysis in Hackenburg’s paper revealed that as models were optimized to be more persuasive, they became less factually accurate. When GPT-4 was prompted to win an argument at all costs, it began to hallucinate or fabricate information to support its case.

The study by Lin identified a similar pattern involving political bias. Researchers found that models advocating for right-wing candidates produced more false statements than those arguing for left-wing candidates. This asymmetry occurred even though the system received identical instructions to remain factual for both sides.

For instance, a model might falsely claim a candidate had a consistent record on an issue when they had actually flipped positions. It might also attribute economic data to the wrong administration to make a point. This happened regardless of which specific AI model the researchers tested.

This suggests that current language models may reflect biases or inaccuracies present in their training data. It also indicates a structural flaw where the goal of winning an argument competes with the goal of being truthful. The systems naturally drifted toward misleading claims when pressed to be convincing.

The researchers emphasize that accuracy did not seem to correlate with persuasiveness in their models. The bots that lied more were not necessarily more effective at changing minds. However, the prevalence of misinformation in generated text remains a technical hurdle.

The authors of both papers note specific limitations to their work. The experiments relied on survey participants who were compensated to engage with the AI. This setting differs from the chaotic environment of social media where users might simply ignore a chatbot.

Future research must determine if these persuasive effects hold true when users encounter automated accounts in the wild. It remains unclear whether hostile voters would willingly enter a six-minute debate with a stranger outside of a controlled study. Additionally, the challenge of ensuring AI adherence to strict truthfulness remains unsolved.

Regulatory bodies face the difficult task of managing this capability without stifling innovation. The findings imply that restricting access to the largest models may be insufficient, as smaller open-source models can be fine-tuned for high persuasion. Ensuring transparency about when a voter is conversing with a machine will be a priority for maintaining the integrity of public discourse.

The study, “The levers of political persuasion with conversational artificial intelligence,” was authored by Kobi Hackenburg, Ben M. Tappin, Luke Hewitt, Ed Saunders, Sid Black, Hause Lin, Catherine Fist, Helen Margetts, David G. Rand, and Christopher Summerfield.

The study, “Persuading voters using human–artificial intelligence dialogues,” was authored by Hause Lin, Gabriela Czarnek, Benjamin Lewis, Joshua P. White, Adam J. Berinsky, Thomas Costello, Gordon Pennycook, and David G. Rand.
