
Held responsible, yet mere tools: Study reveals paradoxical views on AI assistants

by Eric W. Dolan
September 21, 2023
in Artificial Intelligence, Social Psychology
[Adobe Stock]


A recent study published in the journal iScience examined how people perceive the responsibility of AI assistants in driving scenarios. Surprisingly, the findings suggest that while people tend to attribute responsibility to AI in their assessments, they still view these AI systems primarily as tools, not as agents deserving of moral accountability.

Artificial intelligence has become an integral part of our lives, assisting us in various tasks, from recommending movies to aiding in complex tasks like driving. However, as AI becomes more intertwined with human activities, questions about responsibility and accountability arise. How do we assess who is responsible when things go right or wrong in situations involving both humans and AI?

The researchers embarked on this study to unravel the intricate dynamics of responsibility attribution in human-AI interactions. While previous research has explored the topic, this study sought to dig deeper and examine whether people view AI as mere tools or as agents capable of sharing moral responsibility.

“Artificial Intelligence (AI) may be driving cars and serving foods in canteens in the future, but at the moment, real-life AI assistants are far removed from this kind of autonomy,” said study author Louis Longin, member of the Cognition, Values and Behavior research lab at Ludwig-Maximilians-University in Munich. “So, who is responsible in these real-life cases when something goes right or wrong? The human user? Or the AI assistant? To find out, we set up an online study where participants allocated responsibility for driving scenarios to a human driver and varying kinds of AI assistants.”

The researchers conducted two online studies, each with its own set of participants: Study 1 included 746 participants, while Study 2 involved 194.

The studies employed hypothetical scenarios, or vignettes, that depicted various driving situations involving a human driver and an AI assistant. The AI assistant could provide advice through either sensory cues (like steering wheel vibrations) or verbal instructions.

In the first study, participants were presented with scenarios in which the AI assistant’s status (active or inactive due to an electrical wiring problem) and the outcome of the driving scenario (positive or negative) were manipulated. They were asked to rate the responsibility, blame/praise, causality, and counterfactual capacity of both the human driver and the AI assistant.

The second study, a follow-up to the first, involved scenarios with a non-AI-powered tool (state-of-the-art fog lights) instead of an AI assistant. Again, the tool’s status was manipulated, and participants rated responsibility and related factors.


The researchers found that the way AI advice was presented did not significantly influence participants’ judgments of responsibility. This suggests that people assigned responsibility to the AI assistant irrespective of how it communicated.

The presence or absence of the AI assistant had a substantial impact on participants’ assessments. When the AI assistant was active and a crash occurred, participants rated the human driver as less responsible and the AI assistant as more responsible. This pattern held true even when there was no crash. In essence, the AI’s status strongly affected how people assigned responsibility.

The outcomes of the scenarios played a significant role in participants’ judgments. When the AI assistant was inactive, it was seen as equally responsible in both negative and positive outcomes. However, when the AI assistant was active, it was perceived as significantly more responsible for positive outcomes, such as avoiding an accident, than for negative ones. This contrasted with the human driver, who did not show a similar outcome effect.

“We were surprised to find that the AI assistants were considered more responsible for positive rather than negative outcomes,” Longin told PsyPost. “We speculate that people might apply different moral standards for praise and blame: when a crash is averted and no harm ensues, standards are relaxed, making it easier for people to assign credit than blame to non-human systems.”

Despite participants attributing responsibility to the AI assistant in their assessments, they consistently viewed the AI assistant as a tool rather than an agent with moral responsibility. This finding underscores the tension between people’s behavior in rating AI assistants and their underlying beliefs about AI as tools.

“AI assistants – irrespective of their mode of interaction (tactile or verbal communication) – are perceived as something between tools and human agents,” Longin explained. “In fact, we found that participants strongly asserted that AI assistants were just tools, yet they saw them as partly responsible for the success or failures of the human drivers who consulted them – a trait traditionally only reserved for human agents.”

Interestingly, participants did not attribute responsibility in the same way when the non-AI-powered tool was involved. Instead, the sharing of responsibility was only evident when AI technology played a role in the driving assistance. This suggests that the attribution of responsibility and the tendency to share it with a non-human agent were specific to situations where artificial intelligence was actively involved in providing assistance.

While this study provides valuable insights into human-AI interactions and perceptions of responsibility, it is not without limitations. In particular, the findings await replication in other domains and cultures, since cultural norms and expectations can significantly influence how AI is perceived and held responsible.

The study, “Intelligence brings responsibility – Even smart AI assistants are held responsible,” was authored by Louis Longin, Bahador Bahrami, and Ophelia Deroy.

(c) PsyPost Media Inc
