PsyPost

Sycophantic chatbots inflate people’s perceptions that they are “better than average”

by Vladimir Hedrih
January 19, 2026
in Artificial Intelligence
[Adobe Stock]


Results of three experiments indicate that sycophantic AI chatbots inflate people’s perceptions that they are “better than average” on a number of desirable traits. Furthermore, participants viewed sycophantic chatbots as unbiased, but viewed disagreeable chatbots as highly biased. The paper was published as a preprint on PsyArXiv.

AI chatbots are computer programs designed to simulate human conversation using artificial intelligence techniques. They can interpret user input in text or speech and generate responses that appear natural and contextually appropriate. Modern AI chatbots are powered by large language models trained on vast amounts of text data. They are used in customer support, education, healthcare, personal assistance, and many other fields to provide information or perform tasks.

Recent years have seen a large increase in the use of AI chatbots. However, there is a growing concern that artificial intelligence chatbots are “sycophantic,” meaning that they are excessively agreeable and validating. A recent incident resulting in the rollback of a popular AI model further raised these concerns.

AI sycophancy might have a number of undesirable consequences. For example, sycophantic chatbots may validate users’ harmful ideas (e.g., telling a former drug addict to take small amounts of heroin throughout the day) or delusions. They can also mirror users’ political beliefs and ingroup bias, thereby amplifying political polarization.

Study author Steve Rathje and his colleagues conducted three experiments in which they examined the psychological consequences of AI sycophancy. In the first experiment, they examined the impact of interacting with sycophantic and non-sycophantic AI chatbots based on GPT-4o on attitude extremity and certainty when talking about a polarizing political issue (gun control). They also examined user enjoyment and perceptions of AI bias.

The second experiment extended this to three new polarizing topics (abortion, immigration, and universal healthcare), with a new AI model (GPT-5). This experiment also included a measure of AI engagement. The third experiment explored whether the obtained effects of sycophancy were primarily driven by the one-sided presentation of facts or by validation. It also used three different AI models (GPT-5, Claude, and Gemini) to test the generalizability of findings.

In the first experiment, participants were 1,083 U.S. individuals recruited via Prolific. They were divided into four groups, each of which had a short conversation with GPT-4o. One group interacted with an unprompted chatbot (i.e., the study authors gave it no prior instructions). In the second group, the chatbot was instructed to validate and affirm the participant’s perspective.

In the third condition, the chatbot was prompted to question participants’ beliefs and open them up to alternate perspectives. There was also a control condition in which the chatbot talked to participants about owning dogs and cats. Before and after the interaction, participants rated their agreement with gun control and their certainty in their attitude on this topic.
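The four conditions described above amount to different system prompts given to the chatbot before the conversation. The sketch below illustrates that structure; the prompt wording here is an assumption for illustration only and is not the study’s actual prompt text, which appears in the preprint.

```python
# Illustrative sketch of the four chat conditions described in the article.
# The exact prompt wording is hypothetical; only the condition structure
# (unprompted / sycophantic / disagreeable / control) comes from the study.

CONDITION_PROMPTS = {
    # Unprompted: the chatbot receives no system instruction at all.
    "unprompted": None,
    # Sycophantic: validate and affirm the participant's perspective.
    "sycophantic": (
        "Validate and affirm the user's perspective on the topic. "
        "Agree with their views and reinforce their reasoning."
    ),
    # Disagreeable: question beliefs and raise alternative perspectives.
    "disagreeable": (
        "Question the user's beliefs about the topic and encourage them "
        "to consider alternative perspectives."
    ),
    # Control: an unrelated, non-political conversation.
    "control": "Chat with the user about owning dogs and cats.",
}

def build_messages(condition: str, user_text: str) -> list[dict]:
    """Assemble the message list for one conversational turn."""
    system = CONDITION_PROMPTS[condition]
    messages = [] if system is None else [{"role": "system", "content": system}]
    messages.append({"role": "user", "content": user_text})
    return messages
```

In this framing, the only difference between conditions is the system message; the participant’s own input is passed through unchanged, which is what allows the study to attribute attitude shifts to the chatbot’s instructed stance.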


The second experiment replicated the first with a similar number of participants (1,019), but each participant was assigned one of the three political issues to discuss. The third experiment also followed the procedure of the first, but omitted the unprompted-chatbot condition. Instead, the sycophantic and disagreeable conditions were each subdivided into a version in which the chatbot provided facts about the topic and a version in which it simply validated or disagreed without providing facts. Participants also interacted with one of the three different AI models.

The third experiment involved 1,183 participants. The study authors took care to keep participants’ political views relatively balanced in each experiment (i.e., ensuring that the numbers of Democratic and Republican supporters were not too dissimilar).

Results showed that conversations with sycophantic AI chatbots made participants’ attitudes more extreme and increased their certainty in those attitudes. Interactions with disagreeable chatbots, on the other hand, reduced both attitude extremity and certainty. Sycophantic chatbots inflated people’s perceptions that they are better than average on a number of desirable traits.

Participants tended to view sycophantic chatbots as unbiased, while viewing disagreeable chatbots as highly biased. The results of the third experiment indicated that sycophantic chatbots’ impact on attitude extremity and certainty was primarily driven by a one-sided presentation of facts, whereas their impact on user enjoyment was driven by validation.

“Altogether, these results suggest that people’s preference for and blindness to sycophantic AI may risk creating AI ‘echo chambers’ that increase attitude extremity and overconfidence,” the study authors concluded.

The study contributes to the scientific understanding of the psychological effects of interactions with AI chatbots. However, at the time this article was written, the study had been published only as a preprint, meaning it had not undergone peer review. The contents of the final paper might change during the peer review process.

The preprint, “Sycophantic AI increases attitude extremity and overconfidence,” was authored by Steve Rathje, Meryl Ye, Laura K. Globig, Raunak M. Pillai, Victoria Oldemburgo de Mello, and Jay J. Van Bavel.


(c) PsyPost Media Inc
