
People believe their moral traits are too distinct for AI to judge, study finds

by Bianca Setionago
October 10, 2023
in Artificial Intelligence, Racism and Discrimination, Social Psychology
(Photo credit: Adobe Stock)

Artificial intelligence (AI) is rapidly being integrated into our day-to-day lives, but a new study reveals that when it comes to scoring human morals, AI might face a hurdle. The study, published in PNAS Nexus, provides evidence that there is resistance against using AI to judge moral traits because of the overarching belief that AI cannot accurately capture the intricacies and uniqueness of human morality.

Morality is the set of principles that defines what is right and wrong in behavior, guiding ethical choices. Small-scale communities can rely on personal experience and conversation to assess other individuals’ morals, but as societies grow, traditional methods of assessing morals become insufficient to implement on a large scale.

Due to obvious ethical and social challenges, there is a need to understand our general attitudes towards using AI to observe, assess and score human morals.

“While much work has been conducted on the way humans judge the morals of machines, we focus on the novel question of machines that judge the morals of humans,” wrote study authors Zoe Purcell and Jean-François Bonnefon from the University of Toulouse in France.

To investigate this, four sub-studies were conducted.

Study 1 involved 446 participants in the UK who rated the acceptability and expected quality of AI scoring on various moral traits, such as loyalty, integrity, aggression, or racism.

Analyses revealed that 40% of participants found AI moral scoring unacceptable, while only 26% deemed it acceptable. Furthermore, acceptance was linked to perceived accuracy: the less accurate people believed AI moral scoring to be, the less likely they were to accept its use.

Study 2a investigated whether people overestimate the uniqueness of their moral profile by comparing participants’ beliefs about the prevalence of their profile with actual prevalence.

The researchers analyzed data from an existing dataset of 131,015 participants who completed online questionnaires from the “Your Morals” project (yourmorals.org). The project assessed the moral dimensions of care, fairness, loyalty, and authority. These participants were sorted into 16 distinct moral profiles based on how high or low their scores were on each of the four dimensions, and the researchers calculated how often each profile occurred in the population.

Following this, Purcell and Bonnefon recruited 495 participants in the USA to complete the same online questionnaires and categorized each into one of the 16 moral profiles. Participants were then shown an unlabelled graph displaying the percentage of people in the general population with each moral profile and were asked to indicate which percentage they believed corresponded to their own profile type.

Study 2b replicated Study 2a but provided financial incentives for correctly guessing how common their moral profile was from the unlabelled graph, and analyzed a new group of 496 participants from the USA.

Together, the sub-studies found that 88% of participants underestimated how common their moral profile was in the population, indicating they believed their morals were more unique than they actually were.

Finally, Study 3 assessed how people judged the accuracy of AI-generated profiles for unique profiles compared to typical profiles.

The researchers recruited 506 participants in the USA who completed online surveys. As in Study 1, they rated the acceptability and expected quality of AI scoring on various moral traits, but they were then asked whether they believed moral scoring by AI would be more accurate and trustworthy for typical or for unique profiles, and whether they believed AI would perform well at scoring their own moral profile.

This sub-study demonstrated that participants believed AI would have difficulty scoring unique moral profiles, leading these same participants to expect that AI would mischaracterize their own morals.

Purcell and Bonnefon concluded, “our findings suggest that there may not be a great appetite for AI moral scoring, even when the technology gets more accurate. While this means that people may approve of strong regulations against AI moral scoring … it also means that the commercial potential of this tool might be limited, at least as long as people feel too special for machines to score their morals.”

While the research provides valuable insights into the psychological acceptability of AI moral scoring, some limitations should be noted.

For instance, the study focused only on abstract scenarios, not specific contexts such as job applications or dating, so the findings might not fully capture how AI moral scoring would function in real-life situations. Additionally, the study authors noted that morality was analyzed in a broad sense, encompassing “general moral character, specific moral traits and moral values.” Future studies could therefore investigate the different aspects of morality with greater granularity.

The study, “Humans feel too special for machines to score their morals”, was authored by Zoe A. Purcell and Jean-François Bonnefon.
