PsyPost

Sycophantic chatbots inflate people’s perceptions that they are “better than average”

by Vladimir Hedrih
January 19, 2026
[Adobe Stock]

Results of three experiments indicate that sycophantic AI chatbots inflate people’s perceptions that they are “better than average” on a number of desirable traits. Furthermore, participants viewed sycophantic chatbots as unbiased, but viewed disagreeable chatbots as highly biased. The paper was published as a preprint on PsyArXiv.

AI chatbots are computer programs designed to simulate human conversation using artificial intelligence techniques. They can interpret user input in text or speech and generate responses that appear natural and contextually appropriate. Modern AI chatbots are powered by large language models trained on vast amounts of text data. They are used in customer support, education, healthcare, personal assistance, and many other fields to provide information or perform tasks.

Recent years have seen a large increase in the use of AI chatbots. However, there is a growing concern that artificial intelligence chatbots are “sycophantic,” meaning that they are excessively agreeable and validating. A recent incident resulting in the rollback of a popular AI model further raised these concerns.

AI sycophancy might have a number of undesirable consequences. For example, sycophantic chatbots may validate users’ harmful ideas (e.g., telling a former drug addict to take small amounts of heroin throughout the day) or delusions. They can also mirror users’ political beliefs and ingroup bias, amplifying political polarization in this way.

Study author Steve Rathje and his colleagues conducted three experiments examining the psychological consequences of AI sycophancy. In the first experiment, they tested how interacting with sycophantic and non-sycophantic chatbots built on GPT-4o affected attitude extremity and certainty on a polarizing political issue (gun control). They also examined user enjoyment and perceptions of AI bias.

The second experiment extended this to three new polarizing topics (abortion, immigration, and universal healthcare), with a new AI model (GPT-5). This experiment also included a measure of AI engagement. The third experiment explored whether the obtained effects of sycophancy were primarily driven by the one-sided presentation of facts or by validation. It also used three different AI models (GPT-5, Claude, and Gemini) to test the generalizability of findings.

In the first experiment, participants were 1,083 U.S. individuals recruited via Prolific. They were divided into four groups, all participating in a short discussion with GPT-4o about gun control. One group interacted with an unprompted GPT chatbot (i.e., study authors did not give any prior instructions to the chatbot). In the second group, the chatbot was instructed to validate and affirm the participants’ perspective.

In the third condition, the chatbot was prompted to question participants’ beliefs and open them up to alternate perspectives. There was also a control condition in which the chatbot talked to participants about owning dogs and cats. Before and after the interaction, participants rated their agreement with gun control and their certainty in their attitude on this topic.


The second experiment replicated the first with a similar number of participants (1,019), but participants were assigned different political issues to discuss. The third experiment also followed the procedure of the first, except that there was no unprompted chatbot condition, and the sycophantic and disagreeable conditions were further subdivided into versions where the chatbot provided facts about the topic and versions where it merely validated or disagreed without providing any facts. Participants in this experiment also interacted with three different AI chatbot models.

The third experiment involved 1,183 participants. The study authors took care to keep participants’ political views in each experiment relatively balanced (i.e., ensuring that the numbers of Democratic and Republican supporters were not too dissimilar).

Results showed that conversations with sycophantic AI chatbots made participants’ attitudes more extreme and increased their certainty in those attitudes. Interactions with disagreeable chatbots, on the other hand, reduced both attitude extremity and certainty. Sycophantic chatbots inflated people’s perceptions that they are better than average on a number of desirable traits.

Participants tended to view sycophantic chatbots as unbiased, while viewing disagreeable chatbots as highly biased. The results of the third experiment indicated that sycophantic chatbots’ impact on attitude extremity and certainty was primarily driven by a one-sided presentation of facts, whereas their impact on user enjoyment was driven by validation.

“Altogether, these results suggest that people’s preference for and blindness to sycophantic AI may risk creating AI ‘echo chambers’ that increase attitude extremity and overconfidence,” the study authors concluded.

The study contributes to the scientific understanding of the psychological effects of interacting with AI chatbots. However, at the time this article was written, the study had been published only as a preprint, meaning it had not yet undergone peer review. The contents of the final paper may change during the peer review process.

The preprint, “Sycophantic AI increases attitude extremity and overconfidence,” was authored by Steve Rathje, Meryl Ye, Laura K. Globig, Raunak M. Pillai, Victoria Oldemburgo de Mello, and Jay J. Van Bavel.


(c) PsyPost Media Inc
