
Knowing an AI is involved ruins human trust in social games

by Karina Petrova
March 28, 2026
in Artificial Intelligence

When artificial intelligence steps in to make decisions for people during social interactions, human partners respond with less trust, fairness, and cooperation, ultimately leading to worse outcomes for everyone involved. However, when people cannot tell whether an automated program is pulling the strings, they behave as they would with any human partner, and they often quietly rely on the technology themselves. These findings were recently published in the journal PNAS Nexus.

Artificial intelligence programs known as large language models can generate human-like text and answer complex queries. These tools are increasingly integrated into daily life, helping people draft emails, settle disputes, and make choices with social consequences. The general-purpose nature of these text-based models makes them fundamentally different from older algorithms designed for narrow tasks like playing chess or sorting data.

As these conversational algorithms take a more active role in mediating communication, a question arises: how do people respond to a machine acting on behalf of another human? Interactions online are increasingly text-based, meaning algorithms will likely play a larger role in shaping human relationships. Understanding how people react to this shift is a major goal for behavioral scientists.

Past research explored how people interact with specialized systems designed to optimize very specific tasks. Yet few studies have looked at how everyday conversational artificial intelligence affects cooperation between real people. Behavioral researcher Fabian Dvorak at the University of Konstanz in Germany led a team of economists to investigate this phenomenon.

The team wanted to observe human behavior when an algorithm steps in to make choices that directly impact other people’s earnings. To study this, the researchers used a series of classic economic games. These are standardized scenarios used by economists and psychologists to measure social behaviors like reciprocity, cooperation, and altruism.

In these exercises, participants make choices that determine how real money is split between themselves and an anonymous partner. Over three thousand participants from online platforms were recruited to play five types of two-player economic games. One setup was the Ultimatum Game, where one person proposes how to divide a pot of money and the partner must either accept or reject the offer.

If the partner rejects the split in the Ultimatum Game, neither person receives any money. This threat of rejection pressures the proposer to offer a fair amount to avoid a total loss. Another scenario was the Trust Game, where sending money to a partner multiplies its total value. The receiving partner then decides how much of the multiplied total to return to the original sender, testing their trustworthiness.
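
To make the payoff logic of these two games concrete, here is a minimal sketch in Python. The stake sizes, multiplier, and function names are illustrative assumptions for this article, not parameters taken from the study.

# Hypothetical payoff rules for the Ultimatum and Trust games.
# Amounts and the multiplier are illustrative, not the study's actual parameters.

def ultimatum_payoffs(pot, offer, accepted):
    """Proposer keeps pot - offer if the responder accepts; rejection wipes out both payoffs."""
    if accepted:
        return pot - offer, offer  # (proposer, responder)
    return 0, 0  # rejection leaves both players with nothing

def trust_payoffs(endowment, sent, returned_fraction, multiplier=3):
    """The sender's transfer is multiplied; the trustee chooses how much of it to return."""
    multiplied = sent * multiplier
    returned = multiplied * returned_fraction
    sender = endowment - sent + returned
    trustee = multiplied - returned
    return sender, trustee

print(ultimatum_payoffs(10, 4, accepted=True))   # (6, 4)
print(ultimatum_payoffs(10, 1, accepted=False))  # (0, 0): a lowball offer, rejected
print(trust_payoffs(10, sent=10, returned_fraction=0.5))  # (15.0, 15.0): trust rewarded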

The researchers also included the Prisoner’s Dilemma, an exercise where both players can cooperate for mutual gain or betray each other for selfish gain. Another scenario, the Stag Hunt Game, requires players to choose between hunting a large stag together for a high reward or selfishly catching a hare alone for a smaller, guaranteed reward. If one person goes for the stag and the other defects to the hare, the stag hunter gets nothing.
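
The incentive structures of these two dilemmas can be written as simple payoff tables. A minimal sketch, with hypothetical amounts chosen only to preserve the ordering of incentives described above:

# Illustrative payoff matrices for the Prisoner's Dilemma and the Stag Hunt.
# Each entry maps (row player's move, column player's move) to their payoffs.
# The specific numbers are hypothetical, not taken from the study.

prisoners_dilemma = {
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation pays both
    ("cooperate", "defect"):    (0, 5),  # the betrayed cooperator gets nothing
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # mutual betrayal beats being exploited
}

stag_hunt = {
    ("stag", "stag"): (4, 4),  # hunting together yields the high reward
    ("stag", "hare"): (0, 2),  # the lone stag hunter gets nothing
    ("hare", "stag"): (2, 0),
    ("hare", "hare"): (2, 2),  # the safe, guaranteed option
}

print(stag_hunt[("stag", "hare")])  # (0, 2): cooperation collapses if one player defects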


Finally, the set of games included a coordination exercise. In this game, players select an option from a list, such as a planet in the solar system. They earn a larger payout only if both players independently select the exact same option without communicating beforehand.
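
The payoff rule here is the simplest of the five games: a bonus only when the two independent picks match. A minimal sketch, with a hypothetical option list and payout amounts:

# Hypothetical coordination payoff: both players earn a bonus only if their picks match.
PLANETS = ["Mercury", "Venus", "Earth", "Mars", "Jupiter"]

def coordination_payoff(choice_a, choice_b, bonus=5, base=1):
    payout = bonus if choice_a == choice_b else base
    return payout, payout  # both players always earn the same amount

print(coordination_payoff("Earth", "Earth"))  # (5, 5): successful coordination
print(coordination_payoff("Earth", "Mars"))   # (1, 1): coordination failure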

In each match, one participant either had the option to let a version of ChatGPT make their decision or was required to do so, depending on the experimental condition. Both participants would eventually receive the monetary payout based on the final choices. The researchers varied whether the human partner knew about the algorithm’s involvement in the game.

The study included conditions where the artificial intelligence took over randomly and openly. In other setups, participants could choose to hand off their choice to the machine. The team then tested situations where this handoff was either completely transparent to the human partner or kept hidden from them entirely.

The team also experimented with personalizing the algorithm to match the user’s specific personality traits. Participants answered a short survey about their preferences, such as whether they favored introversion over extraversion or truth over harmony. The researchers fed these answers into the initial instructions given to ChatGPT to shape its subsequent decisions.
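
In practice, this kind of personalization amounts to assembling an instruction prompt from the survey answers before the model plays. A minimal sketch of the idea follows; the trait pairs, wording, and function name are hypothetical, and the study’s actual prompt is not reproduced here.

# Hypothetical assembly of a personalized instruction prompt from survey answers.
# The trait pairs and phrasing are illustrative; this is not the study's prompt.

def build_system_prompt(preferences):
    """Turn survey answers like {"truth versus harmony": "truth"} into instructions."""
    lines = ["You are making decisions in an economic game on behalf of a participant."]
    for trait_pair, choice in preferences.items():
        lines.append(f"When weighing {trait_pair}, this participant favors {choice}.")
    lines.append("Decide as this participant plausibly would.")
    return "\n".join(lines)

print(build_system_prompt({
    "introversion versus extraversion": "introversion",
    "truth versus harmony": "truth",
}))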

The results indicated a strong negative reaction to the known use of artificial intelligence. Across all five games, participants acted less cooperatively and demonstrated lower levels of trust when they knew the other person’s decision came from ChatGPT. This preemptive wariness resulted in financial losses for both players.

Participants expected the algorithm to be self-serving and less fair than a human player. As a result, they went on the defensive early in the games. They offered less money in the split scenarios and refused to cooperate in the team exercises, assuming the algorithm would betray them. In the Stag Hunt Game, participants assumed the machine would not cooperate, making them more likely to settle for the safe option of hunting a hare.

The irony of the situation was that the algorithm itself tended to make cooperative and generous choices. The artificial intelligence usually opted to maximize the overall payout for both players. The financial losses in these transparent scenarios were driven entirely by the human players proactively withholding trust.

The Coordination Game was the only exception. The artificial intelligence rarely chose the most popular or expected answer, causing coordination failures. This was the one instance where the machine’s choices directly lowered the payout by acting unpredictably.

A totally different pattern emerged when the involvement of the artificial intelligence was kept a secret. When participants were uncertain whether they were interacting with a human or a machine, they behaved exactly as they would with a normal human partner. Their financial returns did not drop in these hidden conditions.

Despite the negative social reactions, people still liked using the technology for their own benefit. When given the chance to let the machine make the decision, participants did so roughly forty percent of the time. They were especially likely to hand over control when the other player would not know about the machine.

Making a decision in these games requires cognitive effort and carries a risk of feeling guilty for selfish choices. Delegating the task to the program removes the burden of writing a justification and shields the human player from feeling personally responsible for the outcome.

Attempting to make the algorithm act more like the specific user did not prevent the financial losses. While the decision-makers felt the customized algorithm represented them somewhat better, the human partners still reacted negatively. Knowing a machine was involved was enough to trigger defensive behavior, regardless of its programmed personality.

In a secondary test, the team asked independent evaluators to figure out which decisions were made by a human and which came from the algorithm. When looking only at the final numerical choice, evaluators could not distinguish between the two. The algorithm blended in perfectly with normal human behavior.

However, the evaluators could spot the artificial intelligence when they were allowed to read short written justifications for the decisions. The algorithm tended to use technical language focused on potential monetary outcomes. Human players usually explained their choices in simpler, more casual language.

The researchers noted a few limitations to their work. The participants primarily came from specific online platforms in the United Kingdom, meaning the findings might not apply to every demographic group. The study only looked at isolated, one-time interactions between strangers.

In the real world, people might adapt their behavior if they interact with the same automated program repeatedly over a longer period. It is also possible that other methods of personalizing the program could yield different social reactions. The current study relied on direct text prompts to shape the language model’s behavior, which is only one way to customize the tool.

The study raises questions about new technology regulations designed to increase transparency. Laws like the European Union’s Artificial Intelligence Act require companies to disclose when content is generated by a machine. Informing people that an algorithm is involved might unintentionally damage trust and hurt economic cooperation.

Future research will need to explore how to build public trust in these systems rather than simply disclosing their presence. Until people feel comfortable interacting with machines, mandatory transparency could lead to defensive behavior and lower social cohesion.

The study, “Adverse reactions to the use of large language models in social interactions,” was authored by Fabian Dvorak, Regina Stumpf, Sebastian Fehrler, and Urs Fischbacher.
