PsyPost

Experiment reveals limited ability to spot deepfakes, even with prior warnings

by Vladimir Hedrih
September 28, 2024
in Artificial Intelligence
(Photo credit: Adobe Stock)


An experiment conducted in the UK has shown that people generally struggle to distinguish deepfake videos from authentic ones. Participants watching all authentic videos were almost as likely to report something unusual as those who watched a mix of real and deepfake content. When asked to select the deepfake video from a set of five, only 21.6% of participants correctly identified the manipulated video. The research was published in Royal Society Open Science.

Deepfake videos are synthetic videos created with deep learning techniques to appear real. They use artificial intelligence to superimpose faces, mimic voices, and produce hyper-realistic imitations of real people, making it difficult to distinguish real from fake content.

Initially developed for entertainment and creative purposes, deepfakes are now raising ethical and security concerns due to their potential for misuse. They can be employed to manipulate public opinion, harm reputations, or commit fraud by placing individuals in fabricated scenarios. Despite their risks, deepfakes also have legitimate applications in film, education, and digital content creation.

Study author Andrew Lewis and his colleagues wanted to explore whether people can recognize deepfake videos. They asked two questions: do people notice deepfakes unprompted, with no warning that manipulated content might be present, and does a warning about possible deepfakes change detection rates? For example, the researchers wanted to know whether participants could identify which video in a series was a deepfake if told that at least one had been altered. To test this, they designed a controlled experiment.

The study recruited 1,093 UK residents through Lucid Marketplace, an online platform for gathering survey participants. The participants were divided into three experimental groups, and the survey was conducted via Qualtrics.

In the first group, participants watched five authentic videos with no deepfakes. The second group viewed the same set of videos, but one of them was a deepfake, without the participants being warned about its presence. After watching the videos, participants were asked if they noticed anything unusual.

The third group also watched the same video set with one deepfake, but they were informed beforehand that at least one of the videos would be manipulated. They were given a brief explanation of deepfakes, described as “manipulated videos that use deep learning artificial intelligence to make fake videos that appear real,” and were explicitly told, “On the following pages are a series of five additional videos of Mr. Cruise, at least one of which is a deepfake video.” After watching, participants were asked to select which video or videos they believed to be fake.

The deepfake video in the study featured the actor Tom Cruise, with the other videos being genuine clips of him sourced from YouTube. To account for familiarity with the actor, all participants first watched a one-minute interview excerpt of Tom Cruise to provide a baseline understanding of his appearance and speech patterns.


The results showed that participants were largely unable to detect deepfakes. In the group that watched only authentic videos, 34% reported noticing something unusual, compared to 33% in the group that unknowingly watched a deepfake. This small difference suggests that people did not perform better at detecting deepfakes than spotting irregularities in authentic videos.

In the group that received a warning about deepfakes, 78.4% still failed to correctly identify the manipulated video. Participants were generally more likely to mistake one of the genuine videos for a deepfake than to flag the actual fake. However, among those who selected only one video, 39% correctly identified the deepfake, somewhat better than the 20% expected from random guessing among five videos.
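As a rough sanity check on the reported rates, the 21.6% correct-identification rate in the warned group can be compared against the 20% chance baseline for picking one of five videos. The Python sketch below computes a one-sample z statistic for that comparison; note that the per-group sample size of 364 is an assumption (roughly 1,093 participants split across three groups), since the article does not report per-group counts.

```python
from math import sqrt

def chance_baseline(k_options: int) -> float:
    """Probability of picking the deepfake by pure guessing among k videos."""
    return 1.0 / k_options

def z_vs_chance(p_hat: float, n: int, p0: float) -> float:
    """One-sample z statistic for an observed proportion p_hat against chance rate p0."""
    return (p_hat - p0) / sqrt(p0 * (1 - p0) / n)

p0 = chance_baseline(5)  # 0.20 with five candidate videos

# n = 364 is an ASSUMED group size (~1,093 participants / 3 groups);
# the article reports rates, not per-group counts.
z = z_vs_chance(0.216, 364, p0)
```

Under these assumed numbers, z comes out around 0.76, well below conventional significance thresholds, which is consistent with the finding that warned viewers as a whole performed close to chance.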

“We show that in natural browsing contexts, individuals are unlikely to note something unusual when they encounter a deepfake. This aligns with some previous findings indicating individuals struggle to detect high-quality deepfakes,” the study authors concluded.

“Second, we present results on the effect of content warnings on detection, showing that the majority of individuals are still unable to spot a deepfake from a genuine video, even when they are told that at least one video in a series of videos they will view has been altered. Successful content moderation—for example, with specific videos flagged as fake by social media platforms—may therefore depend not on enhancing individuals’ ability to detect irregularities in altered videos on their own, but instead on fostering trust in external sources of content authentication (particularly automated systems for deepfake detection),” they added.

The study sheds light on the general population’s limited ability to detect deepfake videos. However, it is important to note that deepfakes are a relatively new phenomenon, and most people have little experience in identifying them. As deepfakes become more common, it is possible that individuals may develop greater skill in spotting them.

The paper, “Deepfake detection with and without content warnings,” was authored by Andrew Lewis, Patrick Vu, Raymond M. Duch, and Areeq Chowdhury.



PsyPost is a psychology and neuroscience news website dedicated to reporting the latest research on human behavior, cognition, and society.

(c) PsyPost Media Inc
