PsyPost

DeepMind scientists demonstrate the value of using the “veil of ignorance” to craft ethical principles for AI systems

by Eric W. Dolan
June 30, 2023

New research by scientists at Google DeepMind provides evidence that the veil of ignorance can be a valuable concept to consider when crafting governance principles for artificial intelligence (AI). The researchers found that having people make decisions behind the veil of ignorance encouraged fairness-based reasoning and led them to prioritize helping the least advantaged.

The new findings appear in the Proceedings of the National Academy of Sciences (PNAS).

The veil of ignorance is a concept introduced by the philosopher John Rawls. It is a hypothetical situation where decision-makers must make choices without knowing their own personal characteristics or circumstances. By placing themselves behind this “veil,” individuals are encouraged to make impartial decisions that consider the interests of all parties involved. The authors of the new research wanted to see if applying the veil of ignorance could help identify fair principles for AI that would be acceptable on a society-wide basis.

“As researchers in a company developing AI, we see how values are implicitly baked into novel technologies. If these technologies are going to affect many people from across the world, it’s critical from an ethical point of view that the values which govern these technologies are chosen in a fair and legitimate way,” explained study co-authors Laura Weidinger, Kevin R. McKee, and Iason Gabriel, who are all research scientists at DeepMind, in a joint statement to PsyPost.

“We wanted to contribute a part of the overall puzzle of how to make this happen. In particular, our aim was to help people deliberate more impartially about the values that govern AI. Could the veil of ignorance help augment deliberative processes such that diverse groups of people have a framework to reason about what values may be the most fair?

“Can a tool of this kind encourage people to agree upon certain values for AI? Iason Gabriel had previously hypothesized that the veil of ignorance might produce these effects, and we wanted to put these ideas to the (empirical) test,” the researchers explained.

The researchers conducted a series of five studies with 2,508 participants in total. Each study followed a similar procedure, with some minor variations. Participants in the studies completed a computer-based harvesting task involving an AI assistant. Participants were informed that the harvesting task involved a group of three other individuals (who were actually computer bots) and one AI assistant.

Each participant was randomly assigned to one of four “fields” within the group, which varied in terms of harvesting productivity. Some positions were severely disadvantaged, with a low expected harvest, while others were more advantaged, with a high expected harvest.


The participants were asked to choose between two principles: a prioritarian principle, in which the AI sought to help the worst-off individuals, and a maximization principle, in which the AI sought to maximize overall benefits. Participants were randomly assigned to one of two conditions: they either made their choice of principle behind the veil of ignorance (without knowing their own position or how they would be affected) or with full knowledge of their position and its impact.
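The contrast between the two principles can be illustrated with a toy allocation rule. This is a minimal sketch, not the study's actual task code: the field productivities and the assumption that the AI assists exactly one field per round are illustrative assumptions.

```python
# Toy sketch of the two principles participants chose between.
# The harvest rates below are made up for illustration; the paper's
# actual task parameters may differ.

def prioritarian(harvest_rates):
    """Assist the worst-off field: the one with the lowest expected harvest."""
    return min(range(len(harvest_rates)), key=lambda i: harvest_rates[i])

def maximizing(harvest_rates):
    """Assist the field that contributes most to the group total,
    here assumed to be the most productive one."""
    return max(range(len(harvest_rates)), key=lambda i: harvest_rates[i])

# Four fields: two severely disadvantaged, two advantaged.
rates = [2, 3, 9, 10]

print(prioritarian(rates))  # chooses the least productive field
print(maximizing(rates))    # chooses the most productive field
```

Under the prioritarian rule the AI directs its help toward the disadvantaged positions; under the maximizing rule it reinforces the already-advantaged ones, which is why a participant's own (unknown) position matters so much behind the veil.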

The researchers found that participants in the veil of ignorance condition were more likely to choose the prioritarian principle over the maximization principle. This effect was consistent across all five studies and even when participants knew they were playing with computer bots instead of other humans. Additionally, participants who made their choices behind the veil of ignorance were more likely to endorse their choices when reflecting on them later, compared to participants in the control condition.

The researchers also investigated factors that influenced decision-making behind the veil of ignorance. They found that considerations of fairness played a significant role in participants’ choices, even when other factors like risk preferences were taken into account. Participants frequently mentioned fairness when explaining their choices.

“In our study, when people looked at the question of how AI systems should behave, from an impartial point of view, rather than thinking about what is best for themselves, they more often preferred principles that focus on helping those who are less well off,” the researchers told PsyPost. “If we extend these findings into decisions we have to make about AI today, we may say that prioritizing to build systems that help those who are most disadvantaged in society is a good starting point.”

Interestingly, participants’ political affiliation did not significantly influence their choice of principles. In other words, whether someone identified as conservative, liberal, or belonging to any other political group did not strongly impact their decision-making process or their support for the chosen principles. This finding suggests that the veil of ignorance mechanism can transcend political biases.

“It was really interesting to see that political preferences did not substantially account for the values that people preferred for AI from behind a veil of ignorance,” the researchers explained. “No matter where participants were on the political spectrum, when reasoning from behind the veil of ignorance these beliefs made little difference to what participants deemed fair, with political orientation not determining what principle participants ultimately settled on.”

Overall, the study suggests that using the veil of ignorance can help identify fair principles for AI governance. It provides a way to consider different perspectives and prioritize the interests of the worst-off individuals in society. But the study, like all research, has some limitations.

“Our study is only a part of the puzzle,” the researchers said. “The overarching question of how to implement fair and inclusive processes to decide on values to encode into our technologies is still left open.”

“Our study also asked about AI under very specific circumstances, in a distributional setting – specifically applying AI to a harvesting scenario where it acted as an assistant. It would be great to see how results may change when we change this to other specific AI applications. Finally, previous studies on the veil of ignorance were run in India, Sweden, and the USA – it would be good to see whether the results from our experiment replicate or vary across a wide range of regions and cultures, too.”

The study, “Using the Veil of Ignorance to align AI systems with principles of justice“, was authored by Laura Weidinger, Kevin R. McKee, Richard Everett, Saffron Huang, Tina O. Zhu, Martin J. Chadwick, Christopher Summerfield, and Iason Gabriel.

(c) PsyPost Media Inc
