PsyPost

Groundbreaking psychology research sheds light on the trust dynamics of human-machine collectives

by Eric W. Dolan
June 20, 2023

A series of psychological experiments has shown that people’s treatment of machines differs from their treatment of humans, which influences the establishment of trust within human-machine collectives. The new findings provide insights into the dynamics of human-bot interactions and have implications for understanding human behavior in the emerging context of AI systems. The research was published in Nature Communications.

The researchers were interested in studying how humans and bots interact in online communities and how their behavior is influenced by social norms. They wanted to understand the challenges that arise in mixed human-bot collectives, similar to those faced by human societies, such as cooperation, exploitation, and norm stabilization.

“Ever since I was a child, I have always been fascinated by human-bot collectives and their societal implications, inspired by classic movies such as Terminator 2 and Blade Runner,” said study co-author Talal Rahwan, an associate professor of computer science at New York University Abu Dhabi.

“My fascination with these issues grew bigger after I received my PhD in Artificial Intelligence, but I felt that studying human-bot interactions should include the perspective of social scientists and psychologists. With this in mind, I sought out two collaborators, Kinga Makovi (a sociologist) and Jean-François Bonnefon (a psychologist), and together we developed the current study of cooperation and punishment in human-bot collectives.”

To conduct their study, the researchers performed a series of online experiments with a total of 7,917 participants. They created a stylized society (a simplified and artificial representation of a social system) in which participants could take on different roles: Beneficiaries, Helpers, Punishers, and Trustors. The participants played economic games with real financial consequences, which served as proxies for real-life interactions.

In all the experiments, the distinction between human and bot participants was communicated to participants through textual descriptions (referring to humans as “MTurk worker” and bots as “Bot”) and stylized images of robots or people. This information was presented on the screen where participants made their choices.

Humans typically earn trust by sharing and by punishing those who don’t share, but the researchers found that this trust-building effect was weaker when the targets of sharing or punishment were bots. Sharing with bots, or punishing bots, did not generate as much trust as the same behaviors directed at other humans.

In other words, sharing resources with bots resulted in a smaller increase in trust compared to sharing with humans. Similarly, people who did not share with bot beneficiaries were less likely to be punished compared to those who did not share with humans. Bots also did not receive the same level of trust gain as humans when they shared resources. As a result, trust was not easily established in these mixed human-bot communities, which led to worse collective outcomes.
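The finding above — trust gains toward bots were attenuated but not eliminated — can be sketched as a toy model. All numbers below are illustrative placeholders, not values from the study; the function name and parameters are hypothetical:

```python
# Toy model of attenuated trust gains in a mixed human-bot collective.
# Numbers are illustrative only; the study's actual payoffs and effect
# sizes are reported in the Nature Communications paper.

def trust_gain(action: str, partner: str) -> float:
    """Trust a Trustor extends after observing a norm-following action.

    action:  "share" or "punish_nonsharer"
    partner: "human" or "bot" (the target of the action)
    """
    base = {"share": 1.0, "punish_nonsharer": 0.6}[action]
    # Key qualitative finding: gains toward bots are attenuated,
    # not eliminated (attenuation factor here is made up).
    attenuation = 1.0 if partner == "human" else 0.5
    return base * attenuation

print(trust_gain("share", "human"))  # full trust gain
print(trust_gain("share", "bot"))    # smaller, but still positive
```

The nonzero bot-directed gain captures the authors’ interpretation: people carry human social norms into mixed collectives rather than discarding them entirely.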


But the trust gains were not completely eliminated when interacting with bots. This suggests that people carried assumptions about social norms from human societies into these mixed communities, the researchers said.

“It is known that people can signal their trustworthiness to others by acting cooperatively, or by punishing those who do not cooperate with others. We show that the same holds in human-bot collectives, albeit to a lesser extent,” explained co-author Kinga Makovi, an assistant professor at New York University Abu Dhabi.

“More specifically, in our experiments, the trust gained by sharing resources with a bot was less than the trust gained when sharing with a fellow human. We saw something similar when it comes to punishing a bot as opposed to a human. Importantly though, the trust-gains for ‘doing the right thing’ (sharing or punishing those who do not share) were only attenuated, rather than eliminated, suggesting that people carry into human-bot societies similar assumptions about the social norms that they have long relied on within human societies.”

Additionally, the researchers found that when participants were informed about the high consensus regarding the norm of sharing, trust gains generally increased. This suggests that people may alter their behavior once they are made aware of social norms.

“Previous attempts to increase trust and cooperation between humans and bots often tried to make bots look more like humans, but this approach led to disappointing results,” noted co-author Jean-François Bonnefon, a research director at the Toulouse School of Economics.

“We show that there is a better approach: instead of trying to pull bots into the circle of trust by giving them a human appearance, you can nudge people to expand the circle of trust so that it reaches out to nonhuman bots. This is done by making them aware that social norms are shifting, and that many people are starting to think it is a good thing to cooperate with bots, even if they don’t yet realize it is a common opinion.”

The researchers noted that stylized societies with incentivized interactions are useful for studying human cooperation in the lab. However, this approach may not be as suitable for studying human-bot cooperation, since bots have no use for money. Participants recognized that bots did not desire money but acted as if they did, possibly because prosocial behavior toward bots serves as a signal to other humans.

“It is totally understandable that people can signal their trustworthiness by acting cooperatively with fellow humans, but it is surprising that they can also do so by acting cooperatively with bots, or by punishing bots who do not act cooperatively,” Rahwan told PsyPost. “After all, machines do not have emotions or needs, and so one could argue that it is perfectly fine not to share with a bot, or that it is pointless to punish a bot. Yet, our study shows otherwise.”

Like any study, the new research includes some caveats. The sample consisted of online participants, primarily from Amazon Mechanical Turk, who tend to be younger, more educated, and more technologically savvy. The results may not generalize to older or less technologically savvy populations. In addition, the use of a stylized society allowed for experimental control but may yield different results in other contexts.

“Our conclusions are based on an experimental setup that is meant to capture a ‘stylized society’ of humans and bots,” Makovi said. “Will people act differently when facing bots in the field? This remains to be seen.”

“The paper would not have come to fruition without Wendi Li who was an undergraduate student at NYU Abu Dhabi when we started the study, and Anahit Sargsyan who supported the data collection and analysis of the multiple iterations,” Rahwan added.

The study, “Trust within human-machine collectives depends on the perceived consensus about cooperative norms,” was authored by Kinga Makovi, Anahit Sargsyan, Wendi Li, Jean-François Bonnefon, and Talal Rahwan.


(c) PsyPost Media Inc
