Groundbreaking psychology research sheds light on the trust dynamics of human-machine collectives

by Eric W. Dolan
June 20, 2023
in Artificial Intelligence, Social Psychology

A series of psychological experiments has shown that people’s treatment of machines differs from their treatment of humans, which influences the establishment of trust within human-machine collectives. The new findings provide insights into the dynamics of human-bot interactions and have implications for understanding human behavior in the emerging context of AI systems. The research was published in Nature Communications.

The researchers were interested in studying how humans and bots interact in online communities and how their behavior is influenced by social norms. They wanted to understand the challenges that arise in mixed human-bot collectives, similar to those faced by human societies, such as cooperation, exploitation, and norm stabilization.

“Ever since I was a child, I have always been fascinated by human-bot collectives and their societal implications, inspired by classic movies such as Terminator 2 and Blade Runner,” said study co-author Talal Rahwan, an associate professor of computer science at New York University Abu Dhabi.

“My fascination with these issues grew bigger after I received my PhD in Artificial Intelligence, but I felt that studying human-bot interactions should include the perspective of social scientists and psychologists. With this in mind, I sought out two collaborators, Kinga Makovi (a sociologist) and Jean-François Bonnefon (a psychologist), and together we developed the current study of cooperation and punishment in human-bot collectives.”

To conduct their study, the researchers performed a series of online experiments with a total of 7,917 participants. They created a stylized society (a simplified and artificial representation of a social system) in which participants could take on different roles: Beneficiaries, Helpers, Punishers, and Trustors. The participants played economic games with real financial consequences, which served as proxies for real-life interactions.

In all the experiments, the distinction between human and bot participants was communicated to participants through textual descriptions (referring to humans as “MTurk worker” and bots as “Bot”) and stylized images of robots or people. This information was presented on the screen where participants made their choices.

Humans typically earn trust by sharing resources and by punishing those who fail to share, but the researchers observed that these trust gains were less pronounced when the interactions involved bots. Sharing with bots, or punishing bots, did not build as much trust as the same behaviors directed at other humans.

In other words, sharing resources with bots resulted in a smaller increase in trust compared to sharing with humans. Similarly, people who did not share with bot beneficiaries were less likely to be punished compared to those who did not share with humans. Bots also did not receive the same level of trust gain as humans when they shared resources. As a result, trust was not easily established in these mixed human-bot communities, which led to worse collective outcomes.

But the trust gains were not completely eliminated when interacting with bots. This suggests that people carried assumptions about social norms from human societies into these mixed communities, the researchers said.

“It is known that people can signal their trustworthiness to others by acting cooperatively, or by punishing those who do not cooperate with others. We show that the same holds in human-bot collectives, albeit to a lesser extent,” explained co-author Kinga Makovi, an assistant professor at New York University Abu Dhabi.

“More specifically, in our experiments, the trust gained by sharing resources with a bot was less than the trust gained when sharing with a fellow human. We saw something similar when it comes to punishing a bot as opposed to a human. Importantly though, the trust-gains for ‘doing the right thing’ (sharing or punishing those who do not share) were only attenuated, rather than eliminated, suggesting that people carry into human-bot societies similar assumptions about the social norms that they have long relied on within human societies.”

Additionally, the researchers found that when participants were informed about the high consensus regarding the norm of sharing, trust gains generally increased. This suggests that people may alter their behavior once they are made aware of social norms.

“Previous attempts to increase trust and cooperation between humans and bots often tried to make bots look more like humans, but this approach led to disappointing results,” noted co-author Jean-François Bonnefon, a research director at the Toulouse School of Economics.

“We show that there is a better approach: instead of trying to pull bots into the circle of trust by giving them a human appearance, you can nudge people to expand the circle of trust so that it reaches out to nonhuman bots. This is done by making them aware that social norms are shifting, and that many people are starting to think it is a good thing to cooperate with bots, even if they don’t yet realize it is a common opinion.”

The researchers noted that stylized societies with incentivized interactions are useful for studying human cooperation in the lab. However, this approach may be less suitable for studying human-bot cooperation, since bots have no use for money. Participants recognized that bots did not desire money, yet acted as if they did, possibly because prosocial behavior toward bots is seen as a signal to other humans.

“It is totally understandable that people can signal their trustworthiness by acting cooperatively with fellow humans, but it is surprising that they can also do so by acting cooperatively with bots, or by punishing bots who do not act cooperatively,” Rahwan told PsyPost. “After all, machines do not have emotions or needs, and so one could argue that it is perfectly fine not to share with a bot, or that it is pointless to punish a bot. Yet, our study shows otherwise.”

Like any study, the new research includes some caveats. The sample consisted of online participants, primarily from Amazon Mechanical Turk, who tend to be younger, more educated, and more technologically savvy. The results may not generalize to older or less technologically savvy populations. In addition, the use of a stylized society allowed for experimental control but may yield different results in other contexts.

“Our conclusions are based on an experimental setup that is meant to capture a ‘stylized society’ of humans and bots,” Makovi said. “Will people act differently when facing bots in the field? This remains to be seen.”

“The paper would not have come to fruition without Wendi Li, who was an undergraduate student at NYU Abu Dhabi when we started the study, and Anahit Sargsyan, who supported the data collection and analysis of the multiple iterations,” Rahwan added.

The study, “Trust within human-machine collectives depends on the perceived consensus about cooperative norms”, was authored by Kinga Makovi, Anahit Sargsyan, Wendi Li, Jean-François Bonnefon, and Talal Rahwan.
