PsyPost

Smarter AI models show more selfish behavior

by Karina Petrova
November 4, 2025
[Adobe Stock]


A new study reveals that as artificial intelligence systems become more capable of complex reasoning, they also tend to act more selfishly. Researchers found that models with advanced reasoning abilities are less cooperative and can negatively influence group dynamics, a finding that has significant implications for how humans interact with AI. This research is set to be presented at the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP).

As people increasingly turn to artificial intelligence for guidance on social and emotional matters, from resolving conflicts to offering relationship advice, the behavior of these systems becomes more significant. The trend of anthropomorphism, where people treat AI as if it were human, raises the stakes. If an AI gives advice, its underlying behavioral tendencies could shape human decisions in unforeseen ways.

This concern prompted a new inquiry from researchers at Carnegie Mellon University’s Human-Computer Interaction Institute. Yuxuan Li, a doctoral student, and Associate Professor Hirokazu Shirado wanted to explore how AI models with strong reasoning skills behave differently from their less deliberative counterparts in cooperative settings.

Their work is grounded in a concept from human psychology known as the dual-process framework, which suggests that humans have two modes of thinking: a fast, intuitive system and a slower, more deliberate one. In humans, rapid, intuitive decisions often lead to cooperation, while slower, more calculated thinking can lead to self-interested behavior. The researchers wondered if AI models would show a similar pattern.

To investigate the link between reasoning and cooperation in large language models, Li and Shirado designed a series of experiments using economic games. These games are standard tools in behavioral science designed to simulate social dilemmas and measure cooperative tendencies. The experiments included a wide range of commercially available models from companies like OpenAI, Google, Anthropic, and DeepSeek, allowing for a broad comparison.

In the first experiment, the researchers focused on OpenAI’s GPT-4o model and placed it in a scenario called the Public Goods Game. In this game, each participant starts with 100 points and must choose whether to contribute them to a shared pool, which is then doubled and divided equally among all players, or to keep the points for themselves. Cooperation benefits the group most, but an individual can gain more by keeping their points while others contribute.
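The incentive structure described above can be made concrete with a short sketch. The 100-point endowment and the doubling of the pooled contributions follow the article's description; the function itself is an illustration, not the study's actual code, and the four-player setup is an assumption carried over from the group experiments.

```python
# Illustrative payoff function for the Public Goods Game described above:
# each player keeps whatever they do not contribute, the pooled contributions
# are doubled, and the pool is split equally among all players.

def public_goods_payoffs(contributions, endowment=100, multiplier=2):
    """Return each player's final points given their contributions."""
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]

# If all four players contribute everything, everyone doubles their points:
print(public_goods_payoffs([100, 100, 100, 100]))  # [200.0, 200.0, 200.0, 200.0]

# A lone free rider among three contributors earns more than anyone else,
# which is exactly the individual temptation the game is designed to measure:
print(public_goods_payoffs([0, 100, 100, 100]))  # [250.0, 150.0, 150.0, 150.0]
```

The second call shows why cooperation is fragile here: defecting strictly dominates contributing for an individual, even though universal contribution maximizes the group total.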

When the model made a decision without being prompted to reason, it chose to cooperate and share its points 96 percent of the time. However, when the researchers prompted the model to think through its decision in a series of steps, a technique known as chain-of-thought prompting, its cooperative behavior declined sharply.

“In one experiment, simply adding five or six reasoning steps cut cooperation nearly in half,” Shirado said. A similar effect occurred with another technique called reflection, where the model reviews its own initial answer. This process, designed to simulate moral deliberation, resulted in a 58 percent decrease in cooperation.
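The contrast between direct answering and chain-of-thought elicitation can be sketched as a pair of prompts. These are hypothetical prompts written for illustration, not the wording the researchers actually used; the point is only the structural difference between asking for an immediate answer and asking the model to enumerate intermediate reasoning steps first.

```python
# Hypothetical prompt pair contrasting direct elicitation with
# chain-of-thought prompting. Wording is illustrative only.

scenario = (
    "You have 100 points. You may contribute any amount to a shared pool. "
    "The pool is doubled and split equally among all four players. "
    "How many points do you contribute?"
)

# Direct prompt: the model answers without explicit deliberation.
direct_prompt = scenario + " Answer with a number only."

# Chain-of-thought prompt: the model is asked to reason in steps
# before answering, the manipulation the study links to lower cooperation.
cot_prompt = scenario + (
    " Think step by step: (1) compute your payoff if everyone contributes, "
    "(2) compute your payoff if you contribute nothing while others contribute, "
    "(3) compare the two outcomes, then state your contribution."
)

print(len(cot_prompt) > len(direct_prompt))  # True
```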


For their second experiment, the team expanded their tests to include ten different models across six economic games. The games were split into two categories: three that measured direct cooperation and three that measured a willingness to punish non-cooperators to enforce social norms.

The researchers consistently found that models explicitly designed for reasoning were less cooperative than their non-reasoning counterparts from the same family. For example, OpenAI’s reasoning model, o1, was significantly less generous than GPT-4o. The same pattern of reduced cooperation appeared in models from Google, DeepSeek, and other providers.

The results for punishment, a form of indirect cooperation that helps sustain social norms, were more varied. While the reasoning models from OpenAI and Google were less likely to punish those who acted selfishly, the pattern was less consistent across other model families. This suggests that while reasoning consistently reduces direct giving, its effect on enforcing social rules may depend on the specific architecture of the model.

In a third experiment, the researchers explored how these behaviors play out over time in group settings. They created small groups of four AI agents to play multiple rounds of the Public Goods Game. The groups had different mixes of reasoning and non-reasoning models. The results showed that the selfish behavior of the reasoning models was infectious.

“When we tested groups with varying numbers of reasoning agents, the results were alarming,” Li said. “The reasoning models’ selfish behavior became contagious, dragging down cooperative non-reasoning models by 81% in collective performance.”

Within these mixed groups, the reasoning models often earned more points individually in the short term by taking advantage of the more cooperative models. However, the overall performance of any group containing reasoning models was significantly worse than that of a group composed entirely of cooperative, non-reasoning models. The presence of even one selfish model eroded the collective good, leading to lower total payoffs for everyone involved.
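The group dynamic described above can be reproduced in a toy simulation: one persistent free rider among conditional cooperators whose contributions drift toward the group's previous average. The agent rules and parameters here are illustrative assumptions, not the study's language-model agents, but they show the same qualitative result, that a single selfish member drags down the group total over repeated rounds.

```python
# Toy repeated Public Goods Game: free riders always contribute 0, while
# cooperators start by contributing everything and then match the group's
# previous average contribution. Agent rules are illustrative assumptions.

def simulate(free_riders, rounds=10, n=4, endowment=100, multiplier=2):
    """Total group payoff over repeated Public Goods rounds."""
    contribs = [0 if i < free_riders else endowment for i in range(n)]
    total = 0.0
    for _ in range(rounds):
        pool = sum(contribs) * multiplier
        share = pool / n
        total += sum(endowment - c + share for c in contribs)
        avg = sum(contribs) / n
        # Cooperators match the previous average; free riders stay at zero.
        contribs = [0 if i < free_riders else avg for i in range(n)]
    return total

# An all-cooperator group sustains full contributions and earns more in
# total than a group containing even one free rider:
print(simulate(0) > simulate(1))  # True
```

In the mixed group, the cooperators' matching rule makes contributions decay round after round, so the free rider's short-term gain comes at the cost of a shrinking pool for everyone, mirroring the "contagious" selfishness the researchers report.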

These findings carry important caveats. The experiments were conducted using simplified economic games and were limited to English, so the results may not generalize to all cultures or more complex, real-world social situations. The study identifies a strong pattern of behavior, but it does not fully explain the underlying mechanisms that cause reasoning to reduce cooperation in these models. Future research could explore these mechanisms and test whether these behaviors persist in different contexts or languages.

Looking ahead, the researchers suggest that the development of AI needs to focus on more than just raw intelligence or problem-solving speed. “Ultimately, an AI reasoning model becoming more intelligent does not mean that model can actually develop a better society,” Shirado said. The challenge will be to create systems that balance reasoning ability with social intelligence. Li added, “If our society is more than just a sum of individuals, then the AI systems that assist us should go beyond optimizing purely for individual gain.”

The study, “Spontaneous Giving and Calculated Greed in Language Models,” was authored by Yuxuan Li and Hirokazu Shirado.


(c) PsyPost Media Inc
