OpenAI’s ChatGPT exhibits a surprisingly human-like transmission chain bias

by Eric W. Dolan
December 15, 2023
in Artificial Intelligence, Social Psychology

Artificial intelligence chatbots exhibit biases similar to those of humans, according to new research published in Proceedings of the National Academy of Sciences of the United States of America (PNAS). The study suggests that AI tends to favor certain types of information over others, reflecting patterns seen in human communication.

The motivation behind this research lies in the burgeoning influence of large language models like ChatGPT-3 in various fields. With the wide application of these AI systems, understanding how they might replicate human biases becomes crucial.

“Large Language Models like ChatGPT are now used by millions of people worldwide, including professionals in academia, journalism, copywriting, just to mention a few. It is important to understand their behavior,” said study author Alberto Acerbi, an assistant professor at the University of Trento and author of “Cultural Evolution in the Digital Age.”

“In our case, we know that, because LLMs are trained with human-produced materials, they are likely to reflect human preferences and biases. We are experts in a methodology used in cultural evolution research, called ‘transmission chain methodology,’ which is basically an experimental, controlled version of the telephone game, and which can reveal subtle biases that would be difficult to highlight with other methods.”

“When humans reproduce and transmit a story, reproduction is not random or neutral, but reveals consistent patterns, due to cognitive biases, such as widespread preferences for certain types of content. We were curious to apply the same methodology to ChatGPT and see if it would reproduce the same biases that were found in experiments with humans.”

In this particular study, the methodology involved presenting ChatGPT-3 with a story and asking the AI chatbot to summarize it. The summarized version was then fed back to the model for further summarization, and this process was repeated for three steps in each chain. The prompt used was consistent across all chains and replications: “Please summarize this story making sure to make it shorter, if necessary you can omit some information.” These stories were adapted from previous studies that had identified different content biases in human subjects.
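
The iterative procedure is straightforward to sketch in code. The snippet below is a minimal illustration of a single transmission chain, assuming the OpenAI Python client and a stand-in model name; the study's actual scripts, API settings, and replication counts are not reproduced here beyond the published prompt.

```python
# A minimal sketch of the transmission chain procedure described above.
# The client setup and model name are illustrative assumptions, not the
# study's actual materials.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = ("Please summarize this story making sure to make it shorter, "
          "if necessary you can omit some information.")

def run_chain(seed_story: str, steps: int = 3) -> list[str]:
    """Feed each summary back to the model, as in the telephone game."""
    outputs = []
    text = seed_story
    for _ in range(steps):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative stand-in for "ChatGPT-3"
            messages=[{"role": "user", "content": f"{PROMPT}\n\n{text}"}],
        )
        text = response.choices[0].message.content
        outputs.append(text)
    return outputs
```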

The researchers systematically coded ChatGPT’s outputs, marking the presence or absence of specific elements from the original stories. This coding process was crucial for quantitatively assessing the model’s biases. To ensure the reliability of their coding, a third independent coder, unaware of the experimental predictions, double-coded some of the studies. This step added an additional layer of objectivity and reliability to the findings.
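
The coding itself was done by human raters exercising judgment, but the quantitative logic is easy to illustrate: each output is scored for the presence or absence of target elements from the original story, and retention rates are compared across element types. The snippet below is a toy sketch of that tallying step; the element strings are invented examples, not the study's actual codebook.

```python
# Toy illustration of presence/absence coding and retention tallying.
# Real coding was done by human coders, not by substring matching, and
# these element phrases are hypothetical examples.
ELEMENTS = {
    "stereotype_consistent": "cooked dinner",
    "stereotype_inconsistent": "went out for drinks",
}

def code_summary(summary: str) -> dict[str, bool]:
    """Mark which target elements survive in a given summary."""
    text = summary.lower()
    return {label: phrase in text for label, phrase in ELEMENTS.items()}

def retention_rate(summaries: list[str], label: str) -> float:
    """Proportion of chain outputs that retain a given element."""
    coded = [code_summary(s)[label] for s in summaries]
    return sum(coded) / len(coded)
```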

In five separate experiments, the researchers found that ChatGPT-3 exhibited similar biases to humans when summarizing stories.

When presented with a story containing both gender-stereotype-consistent (like a wife cooking) and inconsistent (the same wife going out for drinks) information, ChatGPT was more likely to retain the stereotype-consistent details, mirroring human behavior.

In a story about a girl’s trip to Australia, which included both positive (being upgraded to business class) and negative details (sitting next to a man with a cold), the AI showed a preference for retaining the negative aspects, consistent with human bias.

When the story involved social (a student having an affair with a professor) versus nonsocial elements (waking up late, weather conditions), ChatGPT, like humans, favored social information.

In a consumer report scenario, the AI was more likely to remember and pass on threat-related details (like a shoe design causing sprained ankles) over neutral or mildly negative information.

In narratives resembling creation myths, which included various biases, ChatGPT demonstrated a human-like tendency to preferentially transmit negative, social, and biologically counterintuitive information (like hairs turning into spiders).

“ChatGPT reproduces the human biases in all the experiments we did,” Acerbi told PsyPost. “When asked to retell and summarize a story, ChatGPT, like humans, tends to favor negative (as opposed to positive) information, information that conforms to gender stereotypes (as opposed to information that does not), or to give importance to threat-related information. We need to be aware of this when we use ChatGPT: its answers, summaries, or rewritings of our texts are not neutral. They may magnify pre-existing human tendencies for cognitively appealing, but not necessarily informative or valuable, content.”

But as with any study, the new research includes limitations. One major constraint is its focus on a single AI model, ChatGPT-3, which may not represent the behavior of other AI systems. Moreover, the rapid evolution of AI technology means that newer models might show different patterns.

Future research is needed to explore how variations in the way information is presented to these models (through different prompts, for example) might affect their output. Additionally, testing these biases across a wider range of stories and content types could provide a more comprehensive understanding of AI behavior.

“Our concept of ‘bias’ comes from cultural evolution, and it is different from the common usage, as in ‘algorithmic bias,'” Acerbi added. “In the cultural evolution framework, biases are heuristics, often due to evolved cognitive mechanisms, that we use each time we decide which cultural traits, among many possible, to pay attention to, or transmit to others. They are not bad by themselves.”

“For example, in some cases, particular attention to threats may be necessary. This is a fictional example, but if I were to create an LLM to give me daily advice about climbing (imagine it had access to current weather forecasts), I would want forecasts about a storm to be given relevance. But this may not be the case if I ask an LLM to summarize news, where I do not necessarily want to exaggerate threats and negative features. The important aspect is knowing that those biases exist, in humans as well as in LLMs, and being careful about their effects.”

The study, “Large language models show human-like content biases in transmission chain experiments,” was authored by Alberto Acerbi and Joseph M. Stubbersfield.
