Held responsible, yet mere tools: Study reveals paradoxical views on AI assistants

by Eric W. Dolan
September 21, 2023
in Artificial Intelligence, Social Psychology

A recent study published in the journal iScience examined how people perceive the responsibility of AI assistants in driving scenarios. Surprisingly, the findings suggest that while people tend to attribute responsibility to AI in their assessments, they still view these AI systems primarily as tools, not as agents deserving of moral accountability.

Artificial intelligence has become an integral part of our lives, assisting us in various tasks, from recommending movies to aiding in complex tasks like driving. However, as AI becomes more intertwined with human activities, questions about responsibility and accountability arise. How do we assess who is responsible when things go right or wrong in situations involving both humans and AI?

The researchers embarked on this study to unravel the intricate dynamics of responsibility attribution in human-AI interactions. While previous research has explored the topic, this study sought to dig deeper and examine whether people view AI as mere tools or as agents capable of sharing moral responsibility.

“Artificial Intelligence (AI) may be driving cars and serving foods in canteens in the future, but at the moment, real-life AI assistants are far removed from this kind of autonomy,” said study author Louis Longin, member of the Cognition, Values and Behavior research lab at Ludwig-Maximilians-University in Munich. “So, who is responsible in these real-life cases when something goes right or wrong? The human user? Or the AI assistant? To find out, we set up an online study where participants allocated responsibility for driving scenarios to a human driver and varying kinds of AI assistants.”

The researchers conducted two online studies, each with its own set of participants: the first included 746 participants, and the second involved 194.

The studies employed hypothetical scenarios, or vignettes, that depicted various driving situations involving a human driver and an AI assistant. The AI assistant could provide advice through either sensory cues (like steering wheel vibrations) or verbal instructions.

In the first study, participants were presented with scenarios in which the AI assistant’s status (active or inactive due to an electrical wiring problem) and the outcome of the driving scenario (positive or negative) were manipulated. They were asked to rate the responsibility, blame/praise, causality, and counterfactual capacity of both the human driver and the AI assistant.

The second study, a follow-up to the first, involved scenarios with a non-AI-powered tool (state-of-the-art fog lights) instead of an AI assistant. Again, the tool’s status was manipulated, and participants rated responsibility and related factors.


The researchers found that the way AI advice was presented did not significantly influence participants’ judgments of responsibility. This suggests that people assigned responsibility to the AI assistant irrespective of how it communicated.

The presence or absence of the AI assistant had a substantial impact on participants’ assessments. When the AI assistant was active and a crash occurred, participants rated the human driver as less responsible and the AI assistant as more responsible. This pattern held true even when there was no crash. In essence, the AI’s status strongly affected how people assigned responsibility.

The outcomes of the scenarios played a significant role in participants’ judgments. When the AI assistant was inactive, it was seen as equally responsible in both negative and positive outcomes. However, when the AI assistant was active, it was perceived as significantly more responsible for positive outcomes, such as avoiding an accident, than for negative ones. This contrasted with the human driver, who did not show a similar outcome effect.

“We were surprised to find that the AI assistants were considered more responsible for positive rather than negative outcomes,” Longin told PsyPost. “We speculate that people might apply different moral standards for praise and blame: when a crash is averted and no harm ensues, standards are relaxed, making it easier for people to assign credit than blame to non-human systems.”

Despite participants attributing responsibility to the AI assistant in their assessments, they consistently viewed the AI assistant as a tool rather than an agent with moral responsibility. This finding underscores the tension between people’s behavior in rating AI assistants and their underlying beliefs about AI as tools.

“AI assistants – irrespective of their mode of interaction (tactile or verbal communication) – are perceived as something between tools and human agents,” Longin explained. “In fact, we found that participants strongly asserted that AI assistants were just tools, yet they saw them as partly responsible for the success or failures of the human drivers who consulted them – a trait traditionally only reserved for human agents.”

Interestingly, participants did not attribute responsibility in the same way when the non-AI-powered tool was involved. Instead, the sharing of responsibility was only evident when AI technology played a role in the driving assistance. This suggests that the attribution of responsibility and the tendency to share it with a non-human agent were specific to situations where artificial intelligence was actively involved in providing assistance.

While this study provides valuable insights into human-AI interactions and perceptions of responsibility, it is not without limitations. In particular, the findings have yet to be replicated in other domains and cultures; cultural norms and expectations can significantly influence how AI is perceived and held responsible.

The study, “Intelligence brings responsibility – Even smart AI assistants are held responsible”, was authored by Louis Longin, Bahador Bahrami, and Ophelia Deroy.

(c) PsyPost Media Inc
