PsyPost

Experiment reveals limited ability to spot deepfakes, even with prior warnings

by Vladimir Hedrih
September 28, 2024
(Photo credit: Adobe Stock)

An experiment conducted in the UK has shown that people generally struggle to distinguish deepfake videos from authentic ones. Participants watching all authentic videos were almost as likely to report something unusual as those who watched a mix of real and deepfake content. When asked to select the deepfake video from a set of five, only 21.6% of participants correctly identified the manipulated video. The research was published in Royal Society Open Science.

Deepfake videos are artificially manipulated to appear real using deep learning techniques. These videos use artificial intelligence to superimpose faces, mimic voices, and create hyper-realistic imitations of real people, making it challenging to distinguish between real and fake content.

Initially developed for entertainment and creative purposes, deepfakes are now raising ethical and security concerns due to their potential for misuse. They can be employed to manipulate public opinion, harm reputations, or commit fraud by placing individuals in fabricated scenarios. Despite their risks, deepfakes also have legitimate applications in film, education, and digital content creation.

Study author Andrew Lewis and his colleagues set out to explore whether people can recognize deepfake videos. They wanted to know whether people notice deepfakes unprompted, that is, without any warning that manipulated videos might appear among the content they are viewing, and whether a warning about possible deepfakes changes detection. For example, the researchers wanted to know if participants could identify which video in a series used deepfake technology when told that at least one video was altered. To test this, they designed a controlled experiment.

The study recruited 1,093 UK residents through Lucid Marketplace, an online platform for gathering survey participants. The participants were divided into three experimental groups, and the survey was conducted via Qualtrics.

In the first group, participants watched five authentic videos with no deepfakes. The second group viewed the same set of videos, but one of them was a deepfake, without the participants being warned about its presence. After watching the videos, participants were asked if they noticed anything unusual.

The third group also watched the same video set with one deepfake, but they were informed beforehand that at least one of the videos would be manipulated. They were given a brief explanation of deepfakes, described as “manipulated videos that use deep learning artificial intelligence to make fake videos that appear real,” and were explicitly told, “On the following pages are a series of five additional videos of Mr. Cruise, at least one of which is a deepfake video.” After watching, participants were asked to select which video or videos they believed to be fake.

The deepfake video in the study featured the actor Tom Cruise, with the other videos being genuine clips of him sourced from YouTube. To account for familiarity with the actor, all participants first watched a one-minute interview excerpt of Tom Cruise to provide a baseline understanding of his appearance and speech patterns.

The results showed that participants were largely unable to detect deepfakes. In the group that watched only authentic videos, 34% reported noticing something unusual, compared with 33% in the group that unknowingly watched a deepfake. This negligible difference suggests that the deepfake did not stand out: participants were no more likely to flag something odd when a fake was present than when every video was genuine.

In the group that received a warning about deepfakes, 78.4% still failed to identify the manipulated video. Participants were more likely to mistake one of the genuine videos for a deepfake than to pick the actual fake. However, among those who selected only one video, 39% chose the deepfake, roughly double the 20% rate expected from random guessing among five videos.
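The chance baseline here is simple arithmetic: with one deepfake among five videos, a random guess succeeds 20% of the time. A minimal Python sketch of how one might check whether the 39% hit rate beats that baseline, using an exact binomial tail (note: the subgroup size `n` below is a hypothetical placeholder, since the article reports only percentages, not counts):

```python
from math import comb

def binom_sf(k, n, p):
    """Exact upper tail P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# One deepfake among five videos: random guessing succeeds 20% of the time.
chance = 1 / 5

# Hypothetical subgroup size (the study reports 39% correct among
# single-choice respondents, but not this n).
n = 100
hits = round(0.39 * n)

# Probability of doing at least this well by pure guessing.
p_value = binom_sf(hits, n, chance)
print(f"chance baseline: {chance:.0%}")
print(f"P(>= {hits}/{n} correct by guessing) = {p_value:.2e}")
```

With any plausible subgroup size in the hundreds, a 39% hit rate is far outside what guessing would produce, which is why the authors describe it as above chance.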

“We show that in natural browsing contexts, individuals are unlikely to note something unusual when they encounter a deepfake. This aligns with some previous findings indicating individuals struggle to detect high-quality deepfakes,” the study authors concluded.

“Second, we present results on the effect of content warnings on detection, showing that the majority of individuals are still unable to spot a deepfake from a genuine video, even when they are told that at least one video in a series of videos they will view has been altered. Successful content moderation—for example, with specific videos flagged as fake by social media platforms—may therefore depend not on enhancing individuals’ ability to detect irregularities in altered videos on their own, but instead on fostering trust in external sources of content authentication (particularly automated systems for deepfake detection),” the authors added.

The study sheds light on the general population’s limited ability to detect deepfake videos. However, it is important to note that deepfakes are a relatively new phenomenon, and most people have little experience in identifying them. As deepfakes become more common, it is possible that individuals may develop greater skill in spotting them.

The paper, “Deepfake detection with and without content warnings,” was authored by Andrew Lewis, Patrick Vu, Raymond M. Duch, and Areeq Chowdhury.

(c) PsyPost Media Inc
