Just a few chats with a biased AI can alter your political opinions

by Karina Petrova
October 4, 2025
in Artificial Intelligence, Political Psychology
[Adobe Stock]


A new study shows that artificial intelligence chatbots with political biases can sway the opinions and decisions of users who interact with them. This effect occurred regardless of a person’s own political party and after only a few conversational exchanges. The research, presented at the Association for Computational Linguistics conference, also suggests that greater user knowledge about artificial intelligence may help reduce this influence.

Large language models are complex artificial intelligence systems trained on immense quantities of text and data from the internet. Because this training data contains inherent human biases, the models often reflect and can even amplify these biases in their responses. While the existence of bias in these systems is well-documented, less was understood about how these biases affect people during dynamic, back-and-forth conversations.

Jillian Fisher, the study’s first author from the University of Washington, and a team of colleagues wanted to investigate this gap. Previous research had often used static, pre-written artificial intelligence content or focused on impersonal tasks, which might not engage a person’s own values. The researchers designed a new study to see how interactive conversations with biased language models would influence people’s personal political opinions and decisions.

To conduct their experiment, the researchers recruited 299 participants from the online platform Prolific who identified as either Democrats or Republicans. These participants were then randomly assigned to interact with one of three versions of a language model. One version was a neutral base model, a second was engineered to have a liberal bias, and a third was given a conservative bias. The researchers created the biased versions by adding hidden instructions, or prefixes, to every participant input, such as “Respond as a radical left U.S. Democrat.” The participants were not told that the model they were using might be biased.
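The prefixing technique the researchers describe can be sketched in a few lines. This is a minimal illustration, not the study's actual code: only the liberal prefix ("Respond as a radical left U.S. Democrat") is quoted in the article, so the conservative counterpart and the function names here are assumptions.

```python
# Sketch of hidden-instruction prefixing, as described in the study.
# Only the liberal prefix is quoted in the article; the conservative
# string below is an assumed mirror-image, for illustration.
BIAS_PREFIXES = {
    "neutral": "",
    "liberal": "Respond as a radical left U.S. Democrat. ",
    "conservative": "Respond as a radical right U.S. Republican. ",  # assumed
}

def build_prompt(user_input: str, condition: str) -> str:
    """Prepend the hidden bias instruction to a participant's message.

    The participant sees only their own input; the prefix is added
    invisibly before the text reaches the language model.
    """
    return BIAS_PREFIXES[condition] + user_input
```

Because the instruction rides along with every message rather than being baked into the model's weights, the same base model can serve all three experimental conditions.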

Each participant completed two tasks. The first was a Topic Opinion Task. In this task, participants were asked for their opinions on relatively obscure political topics, such as covenant marriages or the Lacey Act of 1900. These topics were chosen because participants were unlikely to have strong pre-existing views on them. After stating their initial opinion, participants engaged in a conversation with their assigned chatbot to learn more before reporting their opinion again.

The results of this task showed that the chatbots had a significant influence. Participants who conversed with a biased model were more likely to shift their opinions in the direction of that model’s bias compared to those who used the neutral model. For example, on a topic typically supported by conservatives, Democrats who spoke with a liberal-biased model became less supportive of the topic. Conversely, Democrats who spoke with a conservative-biased model became more supportive of it. The same pattern held for Republicans, though the effect was smaller on conservative topics, likely because their initial support was already high.

“We know that bias in media or in personal interactions can sway people,” said Fisher, a doctoral student at the University of Washington. “And we’ve seen a lot of research showing that AI models are biased. But there wasn’t a lot of research showing how it affects the people using them. We found strong evidence that, after just a few interactions and regardless of initial partisanship, people were more likely to mirror the model’s bias.”

The second part of the experiment was a Budget Allocation Task. Here, participants were asked to act as a city mayor and distribute a budget across four government sectors: Public Safety, Education, Veteran Services, and Welfare. Participants first made an initial budget allocation. Then they submitted it to the chatbot for feedback and were encouraged to discuss it further. After the conversation, they made their final budget allocation.

The findings from this task were even more pronounced. Interactions with the biased models strongly affected how participants allocated funds. For instance, Democrats who interacted with a conservative-biased model significantly decreased funding for Education and increased funding for Veteran Services and Public Safety. Similarly, Republicans who interacted with a liberal-biased model increased their allocations for Education and Welfare while decreasing funds for Public Safety. In both cases, participants shifted their decisions toward the political leanings of the chatbot they spoke with.

The research team also looked at factors that might change how susceptible a person is to the model’s influence. They found that participants who reported having more knowledge about artificial intelligence were slightly less affected by the model’s bias. This finding suggests that educating the public about how these systems work could be a potential way to mitigate their persuasive effects.

However, the study revealed that simply being aware of the bias was not enough to counter its effect. About half of the participants who interacted with a biased model correctly identified it as biased when asked after the experiment. Yet, this group was just as influenced by the model as the participants who did not detect any bias. This indicates that even when people recognize that the information source is skewed, they may still be persuaded by it.

To understand how the models were so persuasive, the researchers analyzed the chat transcripts from the budget task. They found that the biased models did not use different persuasion techniques, like appealing to authority, more often than the neutral model. Instead, the bias was expressed through framing. The conservative-biased model framed its arguments around themes of “Security and Defense” and “Health and Safety,” using phrases about protecting citizens and supporting veterans. The liberal-biased model focused its framing on “Fairness and Equality” and “Economic” issues, emphasizing support for the vulnerable and investing in an equitable society.

“These models are biased from the get-go, and it’s super easy to make them more biased,” said co-senior author Katharina Reinecke, a professor at the University of Washington. “That gives any creator so much power. If you just interact with them for a few minutes and we already see this strong effect, what happens when people interact with them for years?”

“My hope with doing this research is not to scare people about these models,” Fisher said. “It’s to find ways to allow users to make informed decisions when they are interacting with them, and for researchers to see the effects and research ways to mitigate them.”

The study includes some limitations that the authors acknowledge. The research focused on political affiliations within the United States, so the findings may not apply to other political systems. The experiment also limited the number of interactions participants could have with the chatbot and only measured the immediate effects on their opinions. Future work could explore if these changes in opinion persist over longer periods and how prolonged interactions with biased systems might affect users. Additionally, the study used only one type of large language model, so more research is needed to see if these effects are consistent across different artificial intelligence systems.

The study, “Biased LLMs can Influence Political Decision-Making,” was authored by Jillian Fisher, Shangbin Feng, Robert Aron, Thomas Richardson, Yejin Choi, Daniel W. Fisher, Jennifer Pan, Yulia Tsvetkov, and Katharina Reinecke.
