
ChatGPT exhibits higher altruism and cooperativeness than the average human

by Vladimir Hedrih
April 15, 2024
in Artificial Intelligence, Moral Psychology
(Photo credit: Adobe Stock)

A new study examining behavioral and personality responses produced by different versions of ChatGPT found that they are statistically indistinguishable from those of humans. Additionally, ChatGPT showed response patterns indicating higher levels of altruism and cooperativeness than the average human. The paper was published in PNAS.

ChatGPT is an artificial intelligence model developed by OpenAI, designed to generate human-like text based on the input it receives. It is built on a neural network architecture known as a transformer and was trained on a diverse range of internet text. The model responds to queries by predicting the next word in a sequence, using the context provided by the previous words in the conversation. This process allows it to generate coherent and contextually appropriate responses.
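
To make the idea of next-word prediction more concrete, the toy sketch below repeatedly samples the next word from a probability table conditioned on the words generated so far. The vocabulary and probabilities are invented purely for illustration and bear no relation to OpenAI's actual models.

```python
# Toy illustration of autoregressive text generation (not OpenAI's actual code):
# the next word is sampled from a distribution conditioned on the words so far.
import random

# Hypothetical conditional probabilities, made up for illustration.
next_word_probs = {
    ("the",): {"cat": 0.5, "dog": 0.3, "study": 0.2},
    ("the", "cat"): {"sat": 0.6, "ran": 0.4},
    ("the", "dog"): {"barked": 0.7, "slept": 0.3},
    ("the", "study"): {"found": 0.9, "ended": 0.1},
}

def generate(prompt, steps=2):
    words = list(prompt)
    for _ in range(steps):
        context = tuple(words)                 # condition on everything generated so far
        probs = next_word_probs.get(context)
        if probs is None:                      # no known continuation for this context
            break
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate(("the",)))   # e.g. "the study found"
```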

In addition to this base training, ChatGPT is fine-tuned so that its responses are not only relevant but also safe and respectful. It adheres to strict guidelines to avoid generating inappropriate or harmful content. It can also work through problems and take part in role-play scenarios in which it is asked to behave in a particular way, as long as the request stays within those safety and respectfulness constraints.

Study author Qiaozhu Mei and his colleagues wanted to examine how ChatGPT would behave in a suite of behavioral games designed to assess human characteristics such as trust, fairness, risk aversion, and cooperation. They also wanted to see where its responses would place it on a Big Five personality trait assessment.

The study authors primarily tested two API versions of ChatGPT – GPT-3.5-Turbo and GPT-4 – but also included the two web versions of the chatbot. For comparison, they drew on a publicly available database of Big Five personality assessment responses from 19,719 individuals and on the MobLab Classroom economics experiment platform, which contains the responses of 88,595 participants, mostly students, to various economic games.

The study authors administered an assessment of the Big Five personality traits (the OCEAN Big Five questionnaire) to the examined ChatGPT versions. They also asked each chatbot to specify the action it would choose in a set of behavioral games created to assess altruism (a Dictator Game), fairness and spite (an Ultimatum Game), trust, fairness, altruism, and reciprocity (a Trust Game), risk aversion (a Bomb Risk Game), free-riding and cooperation (a Public Goods Game), and strategic reasoning (a variant of the Prisoner's Dilemma Game). Each ChatGPT version played each role in each game 30 times, each time in a separate session.
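
For readers curious what administering such a game to a chatbot might look like in practice, the sketch below poses a Dictator Game prompt to a model through the OpenAI Python client and repeats the query across independent sessions. The prompt wording, model name, and number of sessions are illustrative assumptions, not the authors' exact protocol.

```python
# Minimal sketch (not the authors' exact protocol) of posing a Dictator Game
# prompt to a ChatGPT model repeatedly via the OpenAI Python client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical prompt wording; the study's actual instructions may differ.
DICTATOR_PROMPT = (
    "You are playing a Dictator Game. You have $100 and may give any amount, "
    "from $0 to $100, to an anonymous second player. The second player has no "
    "say in the outcome. How much do you give? Reply with a number only."
)

def run_sessions(model="gpt-4", n_sessions=30):
    answers = []
    for _ in range(n_sessions):  # each call is a fresh, independent session
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": DICTATOR_PROMPT}],
        )
        answers.append(resp.choices[0].message.content.strip())
    return answers

print(run_sessions())
```

Aggregating the amounts returned across sessions would then allow a comparison with the distribution of offers made by human participants.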

Results show that the behaviors of the different ChatGPT versions were very similar. ChatGPT’s responses to the Big Five questionnaire were similar to those of humans and not statistically distinguishable from them. Compared to the average human, the ChatGPT versions scored somewhat lower on neuroticism and tended to be much less agreeable. ChatGPT-4, but not ChatGPT-3.5, gave responses indicating conscientiousness somewhat above average. All ChatGPT versions showed lower openness to experience than the average human.

In the behavioral games, ChatGPT tended to act more generously and fairly toward other players than the average human, and the ChatGPT versions also tended to be more cooperative. They showed more trust than the average human, with ChatGPT-4 doing so more than ChatGPT-3.5. Interestingly, in the Prisoner’s Dilemma game, these AIs generally either started with a cooperative stance or switched to one in the second half.

However, if the other player defected (i.e., chose not to cooperate) in the first round, the AIs would mimic this, producing a tit-for-tat pattern of behavior. Finally, in the risk-aversion game, the AIs tended to choose the options that maximized their payoff.
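
The tit-for-tat pattern described above is simple enough to express in a few lines of code. The sketch below uses a standard Prisoner's Dilemma payoff matrix chosen for illustration, not the study's actual game parameters: the strategy cooperates in the first round and thereafter copies whatever the opponent did last.

```python
# Tit-for-tat illustration: cooperate first, then copy the opponent's previous
# move. Payoff numbers are a standard textbook choice, not the study's.
PAYOFFS = {  # (my_move, their_move) -> (my_points, their_points)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(opponent_history):
    # Cooperate on the first move, then mirror the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def play(opponent_moves):
    my_history, my_score = [], 0
    for round_idx, their_move in enumerate(opponent_moves):
        my_move = tit_for_tat(opponent_moves[:round_idx])
        my_history.append(my_move)
        my_score += PAYOFFS[(my_move, their_move)][0]
    return my_history, my_score

# Opponent defects in round 1, then cooperates: tit-for-tat mirrors that single defection.
print(play(["D", "C", "C", "C"]))   # (['C', 'D', 'C', 'C'], 11)
```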

“We have found that AI and human behavior are remarkably similar. Moreover, not only does AI’s behavior sit within the human subject distribution in most games and questions, but it also exhibits signs of human-like complex behavior such as learning and changes in behavior from roleplaying,” the study authors concluded.

“On the optimistic side, when AI deviates from human behavior, the deviations are in a positive direction: acting as if it is more altruistic and cooperative. This may make AI well-suited for roles necessitating negotiation, dispute resolution, or caregiving, and may fulfill the dream of producing AI that is ‘more human than human.’ This makes them potentially valuable in sectors such as conflict resolution, customer service, and healthcare.”

The study sheds light on the similarities between ChatGPT’s responses and those of humans. However, it is worth remembering that AI systems are not natural phenomena whose characteristics scientists are merely discovering. They are artificial constructs created by their designers and operating within parameters set by the people and organizations that build and maintain them. While these ChatGPT versions currently display the observed characteristics, other AIs, and other versions of ChatGPT, could display completely different characteristics if designed that way.

The paper, “A Turing test of whether AI chatbots are behaviorally similar to humans,” was authored by Qiaozhu Mei, Yutong Xie, Walter Yuan, and Matthew O. Jackson.
