
Held responsible, yet mere tools: Study reveals paradoxical views on AI assistants

by Eric W. Dolan
September 21, 2023
in Artificial Intelligence, Social Psychology
[Adobe Stock]


A recent study published in the journal iScience aimed to uncover how people perceive the responsibility of AI assistants in scenarios involving driving. Surprisingly, the findings suggest that while people tend to attribute responsibility to AI in their assessments, they still view these AI systems primarily as tools, not as agents deserving of moral accountability.

Artificial intelligence has become an integral part of our lives, assisting us in various tasks, from recommending movies to aiding in complex tasks like driving. However, as AI becomes more intertwined with human activities, questions about responsibility and accountability arise. How do we assess who is responsible when things go right or wrong in situations involving both humans and AI?

The researchers embarked on this study to unravel the intricate dynamics of responsibility attribution in human-AI interactions. While previous research has explored the topic, this study sought to dig deeper and examine whether people view AI as mere tools or as agents capable of sharing moral responsibility.

“Artificial Intelligence (AI) may be driving cars and serving foods in canteens in the future, but at the moment, real-life AI assistants are far removed from this kind of autonomy,” said study author Louis Longin, a member of the Cognition, Values and Behavior research lab at Ludwig-Maximilians-University in Munich. “So, who is responsible in these real-life cases when something goes right or wrong? The human user? Or the AI assistant? To find out, we set up an online study where participants allocated responsibility for driving scenarios to a human driver and varying kinds of AI assistants.”

The researchers conducted two online studies, each with its own set of participants: Study 1 included 746 participants, and Study 2 involved 194.

The studies employed hypothetical scenarios, or vignettes, that depicted various driving situations involving a human driver and an AI assistant. The AI assistant could provide advice through either sensory cues (like steering wheel vibrations) or verbal instructions.

In the first study, participants were presented with scenarios in which the AI assistant’s status (active or inactive due to an electrical wiring problem) and the outcome of the driving scenario (positive or negative) were manipulated. They were asked to rate the responsibility, blame/praise, causality, and counterfactual capacity of both the human driver and the AI assistant.

The second study, a follow-up to the first, involved scenarios with a non-AI-powered tool (state-of-the-art fog lights) instead of an AI assistant. Again, the tool’s status was manipulated, and participants rated responsibility and related factors.


The researchers found that the way AI advice was presented did not significantly influence participants’ judgments of responsibility. This suggests that people assigned responsibility to the AI assistant irrespective of how it communicated.

The presence or absence of the AI assistant had a substantial impact on participants’ assessments. When the AI assistant was active and a crash occurred, participants rated the human driver as less responsible and the AI assistant as more responsible. This pattern held true even when there was no crash. In essence, the AI’s status strongly affected how people assigned responsibility.

The outcomes of the scenarios played a significant role in participants’ judgments. When the AI assistant was inactive, it was seen as equally responsible in both negative and positive outcomes. However, when the AI assistant was active, it was perceived as significantly more responsible for positive outcomes, such as avoiding an accident, than for negative ones. This contrasted with the human driver, who did not show a similar outcome effect.

“We were surprised to find that the AI assistants were considered more responsible for positive rather than negative outcomes,” Longin told PsyPost. “We speculate that people might apply different moral standards for praise and blame: when a crash is averted and no harm ensues, standards are relaxed, making it easier for people to assign credit than blame to non-human systems.”

Despite participants attributing responsibility to the AI assistant in their assessments, they consistently viewed the AI assistant as a tool rather than an agent with moral responsibility. This finding underscores the tension between people’s behavior in rating AI assistants and their underlying beliefs about AI as tools.

“AI assistants – irrespective of their mode of interaction (tactile or verbal communication) – are perceived as something between tools and human agents,” Longin explained. “In fact, we found that participants strongly asserted that AI assistants were just tools, yet they saw them as partly responsible for the success or failures of the human drivers who consulted them – a trait traditionally only reserved for human agents.”

Interestingly, participants did not attribute responsibility in the same way when the non-AI-powered tool was involved. Instead, the sharing of responsibility was only evident when AI technology played a role in the driving assistance. This suggests that the attribution of responsibility and the tendency to share it with a non-human agent were specific to situations where artificial intelligence was actively involved in providing assistance.

While this study provides valuable insights into human-AI interactions and perceptions of responsibility, it is not without limitations. One limitation is that the findings have yet to be replicated in other domains and cultures; cultural norms and expectations can significantly influence how AI is perceived and held responsible.

The study, “Intelligence brings responsibility – Even smart AI assistants are held responsible,” was authored by Louis Longin, Bahador Bahrami, and Ophelia Deroy.

(c) PsyPost Media Inc
