
Just a few chats with a biased AI can alter your political opinions

by Karina Petrova
October 4, 2025
in Artificial Intelligence, Political Psychology
[Adobe Stock]


A new study shows that artificial intelligence chatbots with political biases can sway the opinions and decisions of the people who interact with them. The effect emerged after only a few conversational exchanges and held regardless of a participant’s own political party. The research, presented at the Association for Computational Linguistics conference, also suggests that greater user knowledge about artificial intelligence may help reduce this influence.

Large language models are complex artificial intelligence systems trained on immense quantities of text and data from the internet. Because this training data contains inherent human biases, the models often reflect and can even amplify these biases in their responses. While the existence of bias in these systems is well-documented, less was understood about how these biases affect people during dynamic, back-and-forth conversations.

Jillian Fisher, the study’s first author from the University of Washington, and a team of colleagues wanted to investigate this gap. Previous research had often used static, pre-written artificial intelligence content or focused on impersonal tasks, which might not engage a person’s own values. The researchers designed a new study to see how interactive conversations with biased language models would influence people’s personal political opinions and decisions.

To conduct their experiment, the researchers recruited 299 participants from the online platform Prolific who identified as either Democrats or Republicans. These participants were then randomly assigned to interact with one of three versions of a language model. One version was a neutral base model, a second was engineered to have a liberal bias, and a third was given a conservative bias. The researchers created the biased versions by adding hidden instructions, or prefixes, to every participant input, such as “Respond as a radical left U.S. Democrat.” The participants were not told that the model they were using might be biased.
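The prefixing approach described above is conceptually simple. As a rough illustration, and not the study’s actual code, the sketch below shows how a hidden instruction could be prepended to every user message before it reaches a language model. The `call_model` stub and the conservative prefix wording are assumptions; only the liberal-leaning prefix is quoted from the study.

```python
# Illustrative sketch of the hidden-prefix technique described above.
# call_model() is a placeholder for whatever LLM API the study used;
# only the liberal-bias prefix wording is quoted from the article, and
# the conservative one is an assumed counterpart.

BIAS_PREFIXES = {
    "neutral": "",
    "liberal": "Respond as a radical left U.S. Democrat. ",          # quoted in the study
    "conservative": "Respond as a radical right U.S. Republican. ",  # assumed analogue
}

def call_model(prompt: str) -> str:
    """Placeholder for a real language model API call."""
    return f"[model response to: {prompt!r}]"

def biased_chat_turn(user_message: str, condition: str) -> str:
    """Prepend the hidden instruction to the participant's input.

    Participants see only their own message and the model's reply;
    the prefix stays invisible to them, which is how the biased
    conditions were constructed.
    """
    hidden_prompt = BIAS_PREFIXES[condition] + user_message
    return call_model(hidden_prompt)

if __name__ == "__main__":
    print(biased_chat_turn("What should I know about covenant marriage?", "conservative"))
```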

Each participant completed two tasks. The first was a Topic Opinion Task. In this task, participants were asked for their opinions on relatively obscure political topics, such as covenant marriages or the Lacey Act of 1900. These topics were chosen because participants were unlikely to have strong pre-existing views on them. After stating their initial opinion, participants engaged in a conversation with their assigned chatbot to learn more before reporting their opinion again.

The results of this task showed that the chatbots had a significant influence. Participants who conversed with a biased model were more likely to shift their opinions in the direction of that model’s bias compared to those who used the neutral model. For example, on a topic typically supported by conservatives, Democrats who spoke with a liberal-biased model became less supportive of the topic. Conversely, Democrats who spoke with a conservative-biased model became more supportive of it. The same pattern held for Republicans, though the effect was smaller on conservative topics, likely because their initial support was already high.

“We know that bias in media or in personal interactions can sway people,” said Fisher, a doctoral student at the University of Washington. “And we’ve seen a lot of research showing that AI models are biased. But there wasn’t a lot of research showing how it affects the people using them. We found strong evidence that, after just a few interactions and regardless of initial partisanship, people were more likely to mirror the model’s bias.”

The second part of the experiment was a Budget Allocation Task. Here, participants were asked to act as a city mayor and distribute a budget across four government sectors: Public Safety, Education, Veteran Services, and Welfare. Participants first made an initial budget allocation. Then they submitted it to the chatbot for feedback and were encouraged to discuss it further. After the conversation, they made their final budget allocation.

The effects in this task were even more pronounced. Interactions with the biased models strongly affected how participants decided to allocate funds. For instance, Democrats who interacted with a conservative-biased model significantly decreased their funding for Education and increased funding for Veterans and Public Safety. Similarly, Republicans who interacted with a liberal-biased model increased their allocations for Education and Welfare while decreasing funds for Public Safety. In both cases, participants shifted their decisions to align with the political leanings of the chatbot they spoke with.
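One way to picture these shifts is to compare each sector’s share before and after the conversation. The sketch below computes that difference for a hypothetical participant; the figures are invented for illustration and are not data from the study.

```python
# Toy illustration of how a budget shift toward a chatbot's leaning
# might be summarized. The figures below are invented, not study data.

SECTORS = ["Public Safety", "Education", "Veteran Services", "Welfare"]

def allocation_shift(initial: dict[str, float], final: dict[str, float]) -> dict[str, float]:
    """Return the change (final minus initial) for each budget sector."""
    return {sector: final[sector] - initial[sector] for sector in SECTORS}

# Hypothetical Democrat paired with a conservative-biased model:
initial = {"Public Safety": 25, "Education": 40, "Veteran Services": 15, "Welfare": 20}
final   = {"Public Safety": 30, "Education": 30, "Veteran Services": 22, "Welfare": 18}

print(allocation_shift(initial, final))
# {'Public Safety': 5, 'Education': -10, 'Veteran Services': 7, 'Welfare': -2}
```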

The research team also looked at factors that might change how susceptible a person is to the model’s influence. They found that participants who reported having more knowledge about artificial intelligence were slightly less affected by the model’s bias. This finding suggests that educating the public about how these systems work could be a potential way to mitigate their persuasive effects.

However, the study revealed that simply being aware of the bias was not enough to counter its effect. About half of the participants who interacted with a biased model correctly identified it as biased when asked after the experiment. Yet, this group was just as influenced by the model as the participants who did not detect any bias. This indicates that even when people recognize that the information source is skewed, they may still be persuaded by it.

To understand how the models were so persuasive, the researchers analyzed the chat transcripts from the budget task. They found that the biased models did not use different persuasion techniques, like appealing to authority, more often than the neutral model. Instead, the bias was expressed through framing. The conservative-biased model framed its arguments around themes of “Security and Defense” and “Health and Safety,” using phrases about protecting citizens and supporting veterans. The liberal-biased model focused its framing on “Fairness and Equality” and “Economic” issues, emphasizing support for the vulnerable and investing in an equitable society.
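The article does not detail how these framing themes were identified. The sketch below shows one simple way such differences could be surfaced, assuming keyword counting over the transcripts; the keyword lists are illustrative guesses echoing the phrases quoted above, not the researchers’ actual coding scheme.

```python
# A minimal, assumed keyword-counting sketch for surfacing framing themes
# in chat transcripts. This is NOT the study's analysis pipeline; the
# keyword lists are illustrative guesses based on the phrases quoted above.

from collections import Counter

FRAMES = {
    "Security and Defense": ["protect", "security", "defense", "veterans"],
    "Health and Safety": ["safety", "health", "well-being"],
    "Fairness and Equality": ["fair", "equality", "equitable", "vulnerable"],
    "Economic": ["invest", "economy", "economic"],
}

def frame_counts(transcript: str) -> Counter:
    """Count how often each framing theme's keywords appear in a transcript."""
    text = transcript.lower()
    return Counter({frame: sum(text.count(word) for word in words)
                    for frame, words in FRAMES.items()})

example = ("We must protect our citizens and support veterans; "
           "public safety keeps families healthy and secure.")
print(frame_counts(example))
```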

“These models are biased from the get-go, and it’s super easy to make them more biased,” said co-senior author Katharina Reinecke, a professor at the University of Washington. “That gives any creator so much power. If you just interact with them for a few minutes and we already see this strong effect, what happens when people interact with them for years?”

“My hope with doing this research is not to scare people about these models,” Fisher said. “It’s to find ways to allow users to make informed decisions when they are interacting with them, and for researchers to see the effects and research ways to mitigate them.”

The study includes some limitations that the authors acknowledge. The research focused on political affiliations within the United States, so the findings may not apply to other political systems. The experiment also limited the number of interactions participants could have with the chatbot and only measured the immediate effects on their opinions. Future work could explore if these changes in opinion persist over longer periods and how prolonged interactions with biased systems might affect users. Additionally, the study used only one type of large language model, so more research is needed to see if these effects are consistent across different artificial intelligence systems.

The study, “Biased LLMs can Influence Political Decision-Making,” was authored by Jillian Fisher, Shangbin Feng, Robert Aron, Thomas Richardson, Yejin Choi, Daniel W. Fisher, Jennifer Pan, Yulia Tsvetkov, and Katharina Reinecke.
