PsyPost

Robots that lie: How humans feel about AI deception in different scenarios

by Angharad Brewer Gillham
September 8, 2024
in Artificial Intelligence

Humans don’t just lie to deceive: sometimes we lie to avoid hurting others, breaking one social norm to uphold another. As robots begin to transition from tools to team members working alongside humans, scientists need to find out how these norms about deception apply to robots. To investigate this, researchers asked people to give their opinions of three scenarios in which robots were deceptive. They found that a robot lying about the external world to spare someone pain was acceptable, but a robot lying about its own capabilities wasn’t — and that people usually blame third parties like developers for unacceptable deceptions.

Honesty is the best policy… most of the time. Social norms help humans understand when we need to tell the truth and when we shouldn’t, to spare someone’s feelings or avoid harm. But how do these norms apply to robots, which are increasingly working with humans? To understand whether humans can accept robots telling lies, scientists asked almost 500 participants to rate and justify different types of robot deception.

“I wanted to explore an understudied facet of robot ethics, to contribute to our understanding of mistrust towards emerging technologies and their developers,” said Andres Rosero, PhD candidate at George Mason University and lead author of the article in Frontiers in Robotics and AI. “With the advent of generative AI, I felt it was important to begin examining possible cases in which anthropomorphic design and behavior sets could be utilized to manipulate users.”

Three kinds of lie

The scientists selected three scenarios reflecting situations where robots already work — medical, cleaning, and retail settings — and three different deception behaviors: external state deceptions, which lie about the world beyond the robot; hidden state deceptions, in which a robot's design conceals its capabilities; and superficial state deceptions, in which a robot's design overstates its capabilities.

In the external state deception scenario, a robot working as a caretaker for a woman with Alzheimer’s lies that her late husband will be home soon. In the hidden state deception scenario, a woman visits a house where a robot housekeeper is cleaning, unaware that the robot is also filming. Finally, in the superficial state deception scenario, a robot working in a shop as part of a study on human-robot relations untruthfully complains of feeling pain while moving furniture, causing a human to ask someone else to take the robot’s place.

What a tangled web we weave

The scientists recruited 498 participants and asked them to read one of the scenarios and then answer a questionnaire. This asked participants whether they approved of the robot’s behavior, how deceptive it was, if it could be justified, and if anyone else was responsible for the deception. These responses were coded by the researchers to identify common themes and analyzed.

The participants disapproved most of the hidden state deception, the housecleaning robot with the undisclosed camera, which they also considered the most deceptive. Although they rated the external state deception and the superficial state deception as only moderately deceptive, they disapproved more of the superficial state deception, in which the robot pretended to feel pain, possibly because it was perceived as manipulative.


Participants approved most of the external state deception, where the robot lied to a patient. They justified the robot’s behavior by saying that it protected the patient from unnecessary pain — prioritizing the norm of sparing someone’s feelings over honesty.

The ghost in the machine

Although participants were able to present justifications for all three deceptions — for instance, some people suggested the housecleaning robot might film for security reasons — most participants declared that the hidden state deception could not be justified. Similarly, about half the participants responding to the superficial state deception said it was unjustifiable. Participants tended to blame these unacceptable deceptions, especially hidden state deceptions, on robot developers or owners.

“I think we should be concerned about any technology that is capable of withholding the true nature of its capabilities, because it could lead to users being manipulated by that technology in ways the user (and perhaps the developer) never intended,” said Rosero. “We’ve already seen examples of companies using web design principles and artificial intelligence chatbots in ways that are designed to manipulate users towards a certain action. We need regulation to protect ourselves from these harmful deceptions.” However, the scientists cautioned that this research needs to be extended to experiments which could model real-life reactions better — for example, videos or short roleplays.

“The benefit of using a cross-sectional study with vignettes is that we can obtain a large number of participant attitudes and perceptions in a cost-controlled manner,” explained Rosero. “Vignette studies provide baseline findings that can be corroborated or disputed through further experimentation. Experiments with in-person or simulated human-robot interactions are likely to provide greater insight into how humans actually perceive these robot deception behaviors.”
