
New scientific findings highlight the complexity of moral judgments of AI agents

by Vladimir Hedrih
June 29, 2023
in Artificial Intelligence, Social Psychology

Recent research conducted in China found that people's opinions about the morality of decisions made by artificial intelligence (AI) agents vary by situation. In certain situations, participants rated AI decisions as more immoral than the same decisions made by humans, but in other types of situations, their judgments depended on the specific decisions the agents made. The study was published in Behavioral Sciences.

AI systems have been rapidly advancing in the past decade and are being used in various roles such as wealth management, autonomous driving, consulting, criminal assessment, and medical diagnostics. These systems make decisions or provide suggestions that can impact people’s property, health, and overall lives. This has sparked a debate about whether these decisions made by AI systems are ethical and whether it is ethical to use AI to make such decisions at all.

This debate is further complicated by the fact that people often do not perceive AI as technological tools used by humans, but rather as autonomous agents that should be accountable for their actions. However, research so far has not been conclusive about whether people indeed perceive AI as independent agents or just as tools used by humans.

A crucial aspect of the debate is the moral judgments people make regarding the actions of AI. Moral judgments are evaluations people make when moral norms are violated. They involve judging something as good or bad, right or wrong, permissible or impermissible. Making moral judgments is a fundamental human response. Some studies suggest that people are more critical when evaluating decisions made by AI systems, while others suggest that they are more lenient compared to judgments of the same decisions made by humans.

In their new study, Yuyan Zhang and her colleagues wanted to examine how people make moral judgments about the behavior of artificial intelligence agents in different moral dilemmas. A moral dilemma is a situation in which a person faces conflicting ethical choices, making it difficult to determine the morally right course of action. In such situations, people tend to make decisions based on the expected consequences (utilitarian choice) or based on accepted moral norms and rules, regardless of the consequences (deontological choice). The researchers organized three experiments to explore this.

The first experiment aimed to determine whether people apply the same moral norms and judgments to human and AI agents in moral dilemmas primarily driven by cognitive processes. The participants were 195 undergraduate students with an average age of 19 years, the majority of whom were female. They read about a human or an AI agent facing a trolley dilemma.

In this dilemma, the character must choose between taking no action and allowing five people to die when a speeding trolley runs over them, or taking action by redirecting the trolley to a second track where it would kill only one person instead of the five. The participants were randomly assigned to four groups, with each group reading about a specific type of agent (AI or human) making a particular choice in the dilemma (inaction or taking action). After reading the scenario, participants were asked to rate the morality, permissibility, wrongness, and blameworthiness of the agent’s behavior.

The second experiment had a similar design but presented a slightly different moral dilemma. The participants were 194 undergraduate students with an average age of 19 years, the majority of whom were female. They were divided into four groups, with each group reading about a specific combination of agent type and choice made. In this experiment, the participants read about the decision in the footbridge dilemma.


This dilemma is similar to the trolley dilemma in that the agent must decide whether or not to act. In this scenario, a speeding trolley is heading toward five people, and the agent can choose to do nothing and allow the tragedy to happen, or push a person off a bridge into the trolley's path. If the agent chooses to push the person off the bridge, that individual will die, but the trolley will be stopped, saving the five people down the track.

In the third experiment, the researchers wanted to ensure that the earlier results were not simply due to differences in the moral views of the participants across the previous studies. The participants were 236 undergraduate students, and each of them read one variant of a trolley dilemma scenario and one variant of a footbridge dilemma scenario. As in the previous experiments, each scenario had four variants (human vs. AI agent, action vs. inaction), and the researchers randomly assigned which variants each participant read.

The results of the first experiment showed that participants judged the decision of the human agent as more moral than the decision of the AI agent, regardless of what the decision was. The AI agent was also assigned more blame for the decision. On average, the action and inaction were rated as equally moral and equally deserving of blame. There were no differences in the average ratings of permissibility or wrongness for either of the two decisions or agents.

In the second experiment, participants rated the decision to intentionally push a person off a bridge (to save five other people) as less moral than the decision to do nothing, regardless of whether it was made by a human or an AI. This action was also seen as less permissible, more wrong, and the agent who made it was considered more deserving of blame, regardless of whether it was a human or an AI.

In the third experiment, participants rated the action in the footbridge dilemma as less moral and more wrong than the action in the trolley dilemma, but they considered inaction in both dilemmas to be equally moral or wrong. There were no differences in moral and wrongness judgments based on the agent type. The average ratings of morality and wrongness were the same regardless of whether the decision-making agent was a human or an AI. However, participants in this experiment judged the actions of the AI in the trolley dilemma as less permissible and more deserving of blame than those of a human, while there was no difference in the footbridge dilemma.

“Overall, these findings revealed that, in the trolley dilemma, people are more interested in the difference between humans and AI agents than action versus inaction,” the researchers concluded. “Conversely, in the footbridge dilemma, people are more interested in action versus inaction. It may be explained that people made moral judgments driven by different response processes in these two dilemmas—controlled cognitive processes occur often in response to dilemmas such as the trolley dilemma and automatic emotional responses occur often in response to dilemmas such as the footbridge dilemma.

“Thus, in the trolley dilemma, controlled cognitive processes may drive people’s attention to the agent type and make the judgment that it is inappropriate for AI agents to make moral decisions. In the footbridge dilemma, the action of pushing someone off a footbridge may evoke a stronger negative emotion than the action of operating a switch in the trolley dilemma.”

The study provides valuable insights into how humans perceive the morality of AI actions. However, it is important to note that the study has limitations. All participants were undergraduate students, and moral judgments were examined using well-known moral dilemma models. The results might differ when considering other age groups or using less familiar moral dilemmas.

The study, “Moral Judgments of Human vs. AI Agents in Moral Dilemmas”, was authored by Yuyan Zhang, Jiahua Wu, Feng Yu, and Liying Xu.
