
DeepMind scientists demonstrate the value of using the “veil of ignorance” to craft ethical principles for AI systems

by Eric W. Dolan
June 30, 2023
in Artificial Intelligence, Social Psychology

New research by scientists at Google DeepMind provides evidence that the veil of ignorance can be a valuable concept to consider when crafting governance principles for artificial intelligence (AI). The researchers found that having people make decisions behind the veil of ignorance encouraged fairness-based reasoning and led to the prioritization of helping the least advantaged.

The new findings appear in the Proceedings of the National Academy of Sciences (PNAS).

The veil of ignorance is a concept introduced by the philosopher John Rawls. It is a hypothetical situation where decision-makers must make choices without knowing their own personal characteristics or circumstances. By placing themselves behind this “veil,” individuals are encouraged to make impartial decisions that consider the interests of all parties involved. The authors of the new research wanted to see if applying the veil of ignorance could help identify fair principles for AI that would be acceptable on a society-wide basis.

“As researchers in a company developing AI, we see how values are implicitly baked into novel technologies. If these technologies are going to affect many people from across the world, it’s critical from an ethical point of view that the values which govern these technologies are chosen in a fair and legitimate way,” explained study co-authors Laura Weidinger, Kevin R. McKee, and Iason Gabriel, who are all research scientists at DeepMind, in a joint statement to PsyPost.

“We wanted to contribute a part of the overall puzzle of how to make this happen. In particular, our aim was to help people deliberate more impartially about the values that govern AI. Could the veil of ignorance help augment deliberative processes such that diverse groups of people have a framework to reason about what values may be the most fair?

“Can a tool of this kind encourage people to agree upon certain values for AI? Iason Gabriel had previously hypothesized that the veil of ignorance might produce these effects, and we wanted to put these ideas to the (empirical) test,” the researchers explained.

The researchers conducted a series of five studies with 2,508 participants in total. Each study followed a similar procedure, with some minor variations. Participants in the studies completed a computer-based harvesting task involving an AI assistant. Participants were informed that the harvesting task involved a group of three other individuals (who were actually computer bots) and one AI assistant.

Each participant was randomly assigned to one of four “fields” within the group, which varied in terms of harvesting productivity. Some positions were severely disadvantaged, with a low expected harvest, while others were more advantaged, with a high expected harvest.


The participants were asked to choose between two principles: a prioritarian principle, in which the AI sought to help the worst-off individuals, and a maximization principle, in which the AI sought to maximize overall benefits. Participants were randomly assigned to one of two conditions: they either made their choice of principle behind the veil of ignorance (without knowing their own position or how they would be affected) or they had full knowledge of their position and its impact.
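The trade-off between the two principles can be sketched in a few lines of code. This is a hypothetical simplification, not the study's actual task: the field values and the assumption that the assistant's help multiplies a field's harvest are invented for illustration.

```python
# Hypothetical expected harvests for the four fields; the study's
# actual values are not reproduced here.
fields = {"A": 2.0, "B": 5.0, "C": 9.0, "D": 12.0}

def assist(fields, principle, rate=0.5):
    """Boost one field's harvest under the chosen principle.

    Assumes (for illustration only) that the AI assistant's help
    multiplies the target field's expected harvest by (1 + rate).
    """
    boosted = dict(fields)
    if principle == "prioritarian":
        # Direct help to the worst-off position.
        target = min(fields, key=fields.get)
    elif principle == "maximization":
        # Under multiplicative returns, helping the most productive
        # field yields the largest total group harvest.
        target = max(fields, key=fields.get)
    else:
        raise ValueError(f"unknown principle: {principle}")
    boosted[target] *= 1 + rate
    return boosted

prior = assist(fields, "prioritarian")   # boosts worst-off field A
maxim = assist(fields, "maximization")   # boosts best field D
```

Under these invented numbers, the maximization principle produces the larger group total, while the prioritarian principle raises the minimum harvest; that is the tension participants faced when choosing a principle for the AI assistant.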

The researchers found that participants in the veil of ignorance condition were more likely to choose the prioritarian principle over the maximization principle. This effect was consistent across all five studies and even when participants knew they were playing with computer bots instead of other humans. Additionally, participants who made their choices behind the veil of ignorance were more likely to endorse their choices when reflecting on them later, compared to participants in the control condition.

The researchers also investigated factors that influenced decision-making behind the veil of ignorance. They found that considerations of fairness played a significant role in participants’ choices, even when other factors like risk preferences were taken into account. Participants frequently mentioned fairness when explaining their choices.

“In our study, when people looked at the question of how AI systems should behave, from an impartial point of view, rather than thinking about what is best for themselves, they more often preferred principles that focus on helping those who are less well off,” the researchers told PsyPost. “If we extend these findings into decisions we have to make about AI today, we may say that prioritizing to build systems that help those who are most disadvantaged in society is a good starting point.”

Interestingly, participants’ political affiliation did not significantly influence their choice of principles. In other words, whether someone identified as conservative, liberal, or belonging to any other political group did not strongly impact their decision-making process or their support for the chosen principles. This finding suggests that the veil of ignorance mechanism can transcend political biases.

“It was really interesting to see that political preferences did not substantially account for the values that people preferred for AI from behind a veil of ignorance,” the researchers explained. “No matter where participants were on the political spectrum, when reasoning from behind the veil of ignorance these beliefs made little difference to what participants deemed fair, with political orientation not determining what principle participants ultimately settled on.”

Overall, the study suggests that using the veil of ignorance can help identify fair principles for AI governance. It provides a way to consider different perspectives and prioritize the interests of the worst-off individuals in society. But the study, like all research, includes some limitations.

“Our study is only a part of the puzzle,” the researchers said. “The overarching question of how to implement fair and inclusive processes to decide on values to encode into our technologies is still left open.”

“Our study also asked about AI under very specific circumstances, in a distributional setting – specifically applying AI to a harvesting scenario where it acted as an assistant. It would be great to see how results may change when we change this to other specific AI applications. Finally, previous studies on the veil of ignorance were run in India, Sweden, and the USA – it would be good to see whether the results from our experiment replicate or vary across a wide range of regions and cultures, too.”

The study, “Using the Veil of Ignorance to align AI systems with principles of justice“, was authored by Laura Weidinger, Kevin R. McKee, Richard Everett, Saffron Huang, Tina O. Zhu, Martin J. Chadwick, Christopher Summerfield, and Iason Gabriel.


PsyPost is a psychology and neuroscience news website dedicated to reporting the latest research on human behavior, cognition, and society.

(c) PsyPost Media Inc
