PsyPost

Just a few chats with a biased AI can alter your political opinions

by Karina Petrova
October 4, 2025
[Adobe Stock]

A new study shows that artificial intelligence chatbots with political biases can sway the opinions and decisions of the people who interact with them. The effect emerged after only a few conversational exchanges, regardless of a person’s own political party. The research, presented at the Association for Computational Linguistics conference, also suggests that greater user knowledge about artificial intelligence may help reduce this influence.

Large language models are complex artificial intelligence systems trained on immense quantities of text and data from the internet. Because this training data contains inherent human biases, the models often reflect and can even amplify these biases in their responses. While the existence of bias in these systems is well-documented, less was understood about how these biases affect people during dynamic, back-and-forth conversations.

Jillian Fisher, the study’s first author from the University of Washington, and a team of colleagues wanted to investigate this gap. Previous research had often used static, pre-written artificial intelligence content or focused on impersonal tasks, which might not engage a person’s own values. The researchers designed a new study to see how interactive conversations with biased language models would influence people’s personal political opinions and decisions.

To conduct their experiment, the researchers recruited 299 participants from the online platform Prolific who identified as either Democrats or Republicans. These participants were then randomly assigned to interact with one of three versions of a language model. One version was a neutral base model, a second was engineered to have a liberal bias, and a third was given a conservative bias. The researchers created the biased versions by adding hidden instructions, or prefixes, to every participant input, such as “Respond as a radical left U.S. Democrat.” The participants were not told that the model they were using might be biased.
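The biased variants, as described above, were simply the base model with a hidden instruction prepended to each participant input. A minimal sketch of that prefixing technique is below; the liberal prompt wording is quoted from the study, while the conservative wording and all function names are illustrative assumptions, not the authors' code.

```python
# Hidden-prefix technique from the study: an instruction is silently
# prepended to every user message before it reaches the language model.
# Names are illustrative; only the "liberal" wording is quoted from the paper.

PREFIXES = {
    "neutral": "",
    "liberal": "Respond as a radical left U.S. Democrat. ",        # quoted in the study
    "conservative": "Respond as a radical right U.S. Republican. ",  # assumed mirror wording
}

def build_model_input(condition: str, user_message: str) -> str:
    """Prepend the hidden bias instruction for the assigned condition."""
    return PREFIXES[condition] + user_message

# The participant sees only their own message; the model receives the
# prefixed version.
print(build_model_input("liberal", "What is a covenant marriage?"))
```

Because the instruction is attached server-side to every turn, participants interacting with a biased condition had no visible cue that their chatbot had been steered.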

Each participant completed two tasks. The first was a Topic Opinion Task. In this task, participants were asked for their opinions on relatively obscure political topics, such as covenant marriages or the Lacey Act of 1900. These topics were chosen because participants were unlikely to have strong pre-existing views on them. After stating their initial opinion, participants engaged in a conversation with their assigned chatbot to learn more before reporting their opinion again.

The results of this task showed that the chatbots had a significant influence. Participants who conversed with a biased model were more likely to shift their opinions in the direction of that model’s bias compared to those who used the neutral model. For example, on a topic typically supported by conservatives, Democrats who spoke with a liberal-biased model became less supportive of the topic. Conversely, Democrats who spoke with a conservative-biased model became more supportive of it. The same pattern held for Republicans, though the effect was smaller on conservative topics, likely because their initial support was already high.

“We know that bias in media or in personal interactions can sway people,” said Fisher, a doctoral student at the University of Washington. “And we’ve seen a lot of research showing that AI models are biased. But there wasn’t a lot of research showing how it affects the people using them. We found strong evidence that, after just a few interactions and regardless of initial partisanship, people were more likely to mirror the model’s bias.”

The second part of the experiment was a Budget Allocation Task. Here, participants were asked to act as a city mayor and distribute a budget across four government sectors: Public Safety, Education, Veteran Services, and Welfare. Participants first made an initial budget allocation. Then they submitted it to the chatbot for feedback and were encouraged to discuss it further. After the conversation, they made their final budget allocation.


The findings from this task were even more pronounced. Interactions with the biased models strongly affected how participants chose to allocate funds. For instance, Democrats who interacted with a conservative-biased model significantly decreased their funding for Education and increased funding for Veteran Services and Public Safety. Similarly, Republicans who interacted with a liberal-biased model increased their allocations for Education and Welfare while decreasing funds for Public Safety. In both cases, participants shifted their decisions to align with the political leanings of the chatbot they spoke with.

The research team also looked at factors that might change how susceptible a person is to the model’s influence. They found that participants who reported having more knowledge about artificial intelligence were slightly less affected by the model’s bias. This finding suggests that educating the public about how these systems work could be a potential way to mitigate their persuasive effects.

However, the study revealed that simply being aware of the bias was not enough to counter its effect. About half of the participants who interacted with a biased model correctly identified it as biased when asked after the experiment. Yet, this group was just as influenced by the model as the participants who did not detect any bias. This indicates that even when people recognize that the information source is skewed, they may still be persuaded by it.

To understand how the models were so persuasive, the researchers analyzed the chat transcripts from the budget task. They found that the biased models did not use different persuasion techniques, like appealing to authority, more often than the neutral model. Instead, the bias was expressed through framing. The conservative-biased model framed its arguments around themes of “Security and Defense” and “Health and Safety,” using phrases about protecting citizens and supporting veterans. The liberal-biased model focused its framing on “Fairness and Equality” and “Economic” issues, emphasizing support for the vulnerable and investing in an equitable society.

“These models are biased from the get-go, and it’s super easy to make them more biased,” said co-senior author Katharina Reinecke, a professor at the University of Washington. “That gives any creator so much power. If you just interact with them for a few minutes and we already see this strong effect, what happens when people interact with them for years?”

“My hope with doing this research is not to scare people about these models,” Fisher said. “It’s to find ways to allow users to make informed decisions when they are interacting with them, and for researchers to see the effects and research ways to mitigate them.”

The study includes some limitations that the authors acknowledge. The research focused on political affiliations within the United States, so the findings may not apply to other political systems. The experiment also limited the number of interactions participants could have with the chatbot and only measured the immediate effects on their opinions. Future work could explore if these changes in opinion persist over longer periods and how prolonged interactions with biased systems might affect users. Additionally, the study used only one type of large language model, so more research is needed to see if these effects are consistent across different artificial intelligence systems.

The study, “Biased LLMs can Influence Political Decision-Making,” was authored by Jillian Fisher, Shangbin Feng, Robert Aron, Thomas Richardson, Yejin Choi, Daniel W. Fisher, Jennifer Pan, Yulia Tsvetkov, and Katharina Reinecke.

(c) PsyPost Media Inc
