People worse at detecting AI faces are more confident in their ability to spot them, study finds

by Eric W. Dolan
November 21, 2023
in Artificial Intelligence, Social Psychology
(Photo credit: OpenAI's DALL·E)

In new research published in Psychological Science, a team of scientists has shed light on a perplexing phenomenon in artificial intelligence (AI): AI-generated faces can appear more “human” than actual human faces. This discovery, termed “hyperrealism,” raises important questions about the potential consequences of AI technology across many aspects of society.

The AI revolution has transformed our daily lives, and one of its most visible products is the creation of strikingly realistic AI faces. This progress has sparked concerns about the distortion of truth and the blurring of the line between reality and AI-generated content.

AI-generated faces have become increasingly accessible and are being used for both beneficial purposes, such as aiding in finding missing children, and malevolent activities, such as disseminating political misinformation through fake social media accounts. These AI faces have become so convincing that people often fail to distinguish them from real human faces.

“AI technologies are rapidly changing the way we live, work, and socialize. As a clinical psychologist, I think it’s essential we understand what these technologies are doing and how they are shaping our experience of the world,” explained study author Amy Dawel, a senior lecturer and director of the Emotions & Faces Lab at The Australian National University.

“Young and middle-aged adults will need to pivot how they work, and even what work they do, with new jobs like prompt engineering already on the table. Our children will grow up in a world that looks very different to the one we experienced. We need to do everything we can to make sure that it’s a positive experience, that leaves our next generation better off, not worse.”

To understand and explain the hyperrealism phenomenon, the researchers drew upon existing psychological theories, such as face-space theory, which posits that faces are coded in a multidimensional space based on how different they are from an average face. Human faces are believed to be distributed within this space, with average features being overrepresented. The researchers hypothesized that AI-generated faces embody these average attributes to a greater extent than real human faces.

Previous studies had shown conflicting results regarding people’s ability to distinguish AI from human faces. Some suggested that people couldn’t tell the difference, while others hinted that people might overidentify AI faces as human. These inconsistencies were partly attributed to the racial bias in the training data of AI algorithms. For instance, the StyleGAN2 algorithm, widely used for generating AI faces, was predominantly trained on White faces, potentially leading to AI faces that appear exceptionally average.

The new study began with a reanalysis of a previous experiment, which found evidence of AI hyperrealism for White faces but not for non-White faces. White AI faces were consistently perceived as more human than White human faces, suggesting a clear case of hyperrealism.


“Our study highlights the biases that AI is perpetuating. We found that White AI faces are perceived as more human than real people’s faces, and than other races of AI faces,” Dawel explained. “This means that White AI faces are particularly convincing, which may mean they are more influential when it comes to catfishing and spreading misinformation.”

In a subsequent experiment, the researchers recruited 124 White U.S. residents aged 18 to 50 years. Participants were tasked with differentiating between AI-generated and real human faces, specifically focusing on AI-generated White faces. They also rated their confidence in their judgments. The results replicated the hyperrealism effect, with AI-generated White faces consistently being perceived as more human than real human faces.

Surprisingly, participants who were less accurate at detecting AI-generated faces tended to be more confident in their judgments. This overconfidence further accentuated the tendency for AI hyperrealism.

“We expected people would realize they weren’t very good at detecting AI, given how realistic the faces have become. We were very surprised to find people were overconfident,” Dawel told PsyPost. “People aren’t very good at spotting AI imposters — and if you think you are, chances are you’re making more errors than most. Our study showed that the people who were most confident made the most errors in detecting AI-generated faces.”

In a second experiment, 610 participants rated a variety of AI and human faces on 14 different attributes, including distinctiveness/averageness, memorability, familiarity, and attractiveness. Unlike Experiment 1, participants were not informed that AI faces were present, and those who guessed that AI faces were part of the study were excluded.

The results showed that several attributes influenced whether faces were perceived as human. Faces were more likely to be judged as human if they appeared more proportional, alive in the eyes, and familiar. On the other hand, they were less likely to be judged as human if they were memorable, symmetrical, attractive, and smooth-skinned.

The researchers also used a lens model to investigate how each of the 14 attributes contributed to the misjudgment of AI faces as human. They found that AI faces were more average (less distinctive), familiar, and attractive, and less memorable than human faces. AI hyperrealism was primarily explained by attributes that were utilized in the wrong direction, such as facial proportions, familiarity, and memorability. In contrast, attributes that were utilized in the correct direction, such as facial attractiveness, symmetry, and congruent lighting/shadows, had a smaller effect.

Furthermore, the researchers conducted a machine learning experiment to determine if human-perceived attributes could be used to accurately classify AI and human faces. Using a random forest classification model, they were able to achieve a high accuracy rate of 94% in classifying face types (AI vs. human) based on the 14 attributes identified in Experiment 2. This suggests that AI faces, particularly those generated by StyleGAN2, can be reliably distinguished from human faces using human-perceived attributes.
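This kind of classification step can be sketched with scikit-learn’s random forest. Everything below is illustrative only: the data is synthetic, and the 14 feature columns stand in for the study’s human-rated attributes (which are not reproduced here). It simply shows how attribute ratings could feed a random forest that separates AI from human faces.

```python
# Hypothetical sketch: classifying AI vs. human faces from 14 rated
# attributes with a random forest. Data is simulated, not the study's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200  # simulated faces per class

# Simulated attribute ratings: AI faces (label 1) are shifted toward
# more "average"/familiar values, mimicking the hyperrealism pattern.
human_faces = rng.normal(loc=0.0, scale=1.0, size=(n, 14))
ai_faces = rng.normal(loc=0.8, scale=1.0, size=(n, 14))

X = np.vstack([human_faces, ai_faces])
y = np.array([0] * n + [1] * n)  # 0 = human, 1 = AI

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
print(f"mean accuracy: {scores.mean():.2f}")
```

On this cleanly separated synthetic data the classifier scores well above chance; the study’s reported 94% accuracy came from real attribute ratings, not from a simulation like this one.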

“The main problem right now is that a lot of the AI technology is not transparent,” Dawel said. “We don’t know how it is being trained, so we don’t have much insight into the biases it is producing. There is an urgent need for research funding to independent bodies, like universities, who can investigate what’s happening and provide ethical guidance.”

“Government needs to step in and require companies to disclose what their AI is trained on and put in place systems for protecting against bias. If you are a parent, now is the time to lobby your local minister for action on regulating AI, to ensure it benefits rather than harms our children. Companies that are creating AI should be required to have independent oversight.”

The study, “AI Hyperrealism: Why AI Faces Are Perceived as More Real Than Human Ones”, was authored by Elizabeth J. Miller, Ben A. Steward, Zak Witkower, Clare A. M. Sutherland, Eva G. Krumhuber, and Amy Dawel.
