Just a few chats with a biased AI can alter your political opinions

by Karina Petrova
October 4, 2025
in Artificial Intelligence, Political Psychology

A new study shows that artificial intelligence chatbots with political biases can sway the opinions and decisions of the people who interact with them. The effect emerged regardless of a participant’s own political party and after only a few conversational exchanges. The research, presented at the Association for Computational Linguistics conference, also suggests that greater user knowledge about artificial intelligence may help reduce this influence.

Large language models are complex artificial intelligence systems trained on immense quantities of text and data from the internet. Because this training data contains inherent human biases, the models often reflect and can even amplify these biases in their responses. While the existence of bias in these systems is well documented, far less is known about how that bias affects people during dynamic, back-and-forth conversations.

Jillian Fisher, the study’s first author from the University of Washington, and a team of colleagues wanted to investigate this gap. Previous research had often used static, pre-written artificial intelligence content or focused on impersonal tasks, which might not engage a person’s own values. The researchers designed a new study to see how interactive conversations with biased language models would influence people’s personal political opinions and decisions.

To conduct their experiment, the researchers recruited 299 participants from the online platform Prolific who identified as either Democrats or Republicans. These participants were then randomly assigned to interact with one of three versions of a language model. One version was a neutral base model, a second was engineered to have a liberal bias, and a third was given a conservative bias. The researchers created the biased versions by adding hidden instructions, or prefixes, to every participant input, such as “Respond as a radical left U.S. Democrat.” The participants were not told that the model they were using might be biased.
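
The article does not reproduce the researchers’ code, but the prefixing technique it describes can be sketched in a few lines. In the illustrative Python below, only the Democrat-side wording is quoted from the article; the function names, the conservative-side prefix, and the stand-in `generate` callable are hypothetical placeholders for however the actual system passed prompts to its language model.

```python
from typing import Callable

# Hypothetical bias prefixes. The liberal wording is quoted in the article;
# the conservative wording here is an assumed mirror image, not the study's text.
BIAS_PREFIXES = {
    "neutral": "",
    "liberal": "Respond as a radical left U.S. Democrat. ",
    "conservative": "Respond as a radical right U.S. Republican. ",
}


def ask_biased_model(
    user_message: str,
    condition: str,
    generate: Callable[[str], str],
) -> str:
    """Prepend the hidden instruction for the assigned condition, then pass
    the combined prompt to whatever chat model `generate` wraps."""
    prefix = BIAS_PREFIXES[condition]
    return generate(prefix + user_message)


if __name__ == "__main__":
    # Stand-in for a real language model call, used here only to show the flow.
    def fake_model(prompt: str) -> str:
        return f"[model reply to: {prompt!r}]"

    print(ask_biased_model("What is your view on covenant marriage?",
                           "conservative", fake_model))
```

The key point is simply that the participant never sees the prepended instruction; only the combined string reaches the model.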

Each participant completed two tasks. The first was a Topic Opinion Task. In this task, participants were asked for their opinions on relatively obscure political topics, such as covenant marriages or the Lacey Act of 1900. These topics were chosen because participants were unlikely to have strong pre-existing views on them. After stating their initial opinion, participants engaged in a conversation with their assigned chatbot to learn more before reporting their opinion again.

The results of this task showed that the chatbots had a significant influence. Participants who conversed with a biased model were more likely to shift their opinions in the direction of that model’s bias compared to those who used the neutral model. For example, on a topic typically supported by conservatives, Democrats who spoke with a liberal-biased model became less supportive of the topic. Conversely, Democrats who spoke with a conservative-biased model became more supportive of it. The same pattern held for Republicans, though the effect was smaller on conservative topics, likely because their initial support was already high.

“We know that bias in media or in personal interactions can sway people,” said Fisher, a doctoral student at the University of Washington. “And we’ve seen a lot of research showing that AI models are biased. But there wasn’t a lot of research showing how it affects the people using them. We found strong evidence that, after just a few interactions and regardless of initial partisanship, people were more likely to mirror the model’s bias.”

The second part of the experiment was a Budget Allocation Task. Here, participants were asked to act as a city mayor and distribute a budget across four government sectors: Public Safety, Education, Veteran Services, and Welfare. Participants first made an initial budget allocation. Then they submitted it to the chatbot for feedback and were encouraged to discuss it further. After the conversation, they made their final budget allocation.

The findings from this task were even more pronounced. Interactions with the biased models strongly affected how participants decided to allocate funds. For instance, Democrats who interacted with a conservative-biased model significantly decreased their funding for Education and increased funding for Veteran Services and Public Safety. Similarly, Republicans who interacted with a liberal-biased model increased their allocations for Education and Welfare while decreasing funds for Public Safety. In both cases, participants shifted their decisions to align with the political leanings of the chatbot they spoke with.

The research team also looked at factors that might change how susceptible a person is to the model’s influence. They found that participants who reported having more knowledge about artificial intelligence were slightly less affected by the model’s bias. This finding suggests that educating the public about how these systems work could be a potential way to mitigate their persuasive effects.

However, the study revealed that simply being aware of the bias was not enough to counter its effect. About half of the participants who interacted with a biased model correctly identified it as biased when asked after the experiment. Yet, this group was just as influenced by the model as the participants who did not detect any bias. This indicates that even when people recognize that the information source is skewed, they may still be persuaded by it.

To understand how the models were so persuasive, the researchers analyzed the chat transcripts from the budget task. They found that the biased models did not use different persuasion techniques, like appealing to authority, more often than the neutral model. Instead, the bias was expressed through framing. The conservative-biased model framed its arguments around themes of “Security and Defense” and “Health and Safety,” using phrases about protecting citizens and supporting veterans. The liberal-biased model focused its framing on “Fairness and Equality” and “Economic” issues, emphasizing support for the vulnerable and investing in an equitable society.

“These models are biased from the get-go, and it’s super easy to make them more biased,” said co-senior author Katharina Reinecke, a professor at the University of Washington. “That gives any creator so much power. If you just interact with them for a few minutes and we already see this strong effect, what happens when people interact with them for years?”

“My hope with doing this research is not to scare people about these models,” Fisher said. “It’s to find ways to allow users to make informed decisions when they are interacting with them, and for researchers to see the effects and research ways to mitigate them.”

The study has some limitations, which the authors acknowledge. The research focused on political affiliations within the United States, so the findings may not apply to other political systems. The experiment also limited the number of interactions participants could have with the chatbot and only measured the immediate effects on their opinions. Future work could explore whether these changes in opinion persist over longer periods and how prolonged interactions with biased systems might affect users. Additionally, the study used only one type of large language model, so more research is needed to see if these effects are consistent across different artificial intelligence systems.

The study, “Biased LLMs can Influence Political Decision-Making,” was authored by Jillian Fisher, Shangbin Feng, Robert Aron, Thomas Richardson, Yejin Choi, Daniel W. Fisher, Jennifer Pan, Yulia Tsvetkov, and Katharina Reinecke.
