Experiment reveals limited ability to spot deepfakes, even with prior warnings

by Vladimir Hedrih
September 28, 2024
in Artificial Intelligence
(Photo credit: Adobe Stock)

An experiment conducted in the UK has shown that people generally struggle to distinguish deepfake videos from authentic ones. Participants watching all authentic videos were almost as likely to report something unusual as those who watched a mix of real and deepfake content. When asked to select the deepfake video from a set of five, only 21.6% of participants correctly identified the manipulated video. The research was published in Royal Society Open Science.

Deepfake videos are artificially manipulated to appear real using deep learning techniques. These videos use artificial intelligence to superimpose faces, mimic voices, and create hyper-realistic imitations of real people, making it challenging to distinguish between real and fake content.

Initially developed for entertainment and creative purposes, deepfakes are now raising ethical and security concerns due to their potential for misuse. They can be employed to manipulate public opinion, harm reputations, or commit fraud by placing individuals in fabricated scenarios. Despite their risks, deepfakes also have legitimate applications in film, education, and digital content creation.

Study author Andrew Lewis and his colleagues wanted to explore whether people can recognize deepfake videos. They were interested in two questions: whether viewers spot deepfakes unprompted, with no warning that manipulated content might be present, and whether an explicit warning about possible deepfakes improves detection. For example, the researchers wanted to know whether participants could identify which video in a series was a deepfake when told that at least one had been altered. To test this, they designed a controlled experiment.

The study recruited 1,093 UK residents through Lucid Marketplace, an online platform for gathering survey participants. The participants were divided into three experimental groups, and the survey was conducted via Qualtrics.

In the first group, participants watched five authentic videos with no deepfakes. The second group viewed the same set of videos, but one of them was a deepfake, without the participants being warned about its presence. After watching the videos, participants were asked if they noticed anything unusual.

The third group also watched the same video set with one deepfake, but they were informed beforehand that at least one of the videos would be manipulated. They were given a brief explanation of deepfakes, described as “manipulated videos that use deep learning artificial intelligence to make fake videos that appear real,” and were explicitly told, “On the following pages are a series of five additional videos of Mr. Cruise, at least one of which is a deepfake video.” After watching, participants were asked to select which video or videos they believed to be fake.

The deepfake video in the study featured the actor Tom Cruise, with the other videos being genuine clips of him sourced from YouTube. To account for familiarity with the actor, all participants first watched a one-minute interview excerpt of Tom Cruise to provide a baseline understanding of his appearance and speech patterns.

The results showed that participants were largely unable to detect deepfakes. In the group that watched only authentic videos, 34% reported noticing something unusual, compared to 33% in the group that unknowingly watched a deepfake. This negligible difference suggests that unwarned viewers were no more likely to notice something amiss in a deepfake than in authentic footage.

In the group that received a warning about deepfakes, 78.4% were still unable to correctly identify the manipulated video. Participants were generally more likely to mistake one of the genuine videos for a deepfake than to correctly identify the actual fake. However, among those who selected only one video, 39% correctly identified the deepfake, somewhat better than the 20% expected from guessing at random among the five videos.

“We show that in natural browsing contexts, individuals are unlikely to note something unusual when they encounter a deepfake. This aligns with some previous findings indicating individuals struggle to detect high-quality deepfakes,” the study authors wrote.

“Second, we present results on the effect of content warnings on detection, showing that the majority of individuals are still unable to spot a deepfake from a genuine video, even when they are told that at least one video in a series of videos they will view has been altered. Successful content moderation—for example, with specific videos flagged as fake by social media platforms—may therefore depend not on enhancing individuals’ ability to detect irregularities in altered videos on their own, but instead on fostering trust in external sources of content authentication (particularly automated systems for deepfake detection),” the study authors concluded.

The study sheds light on the general population’s limited ability to detect deepfake videos. However, it is important to note that deepfakes are a relatively new phenomenon, and most people have little experience in identifying them. As deepfakes become more common, it is possible that individuals may develop greater skill in spotting them.

The paper, “Deepfake detection with and without content warnings,” was authored by Andrew Lewis, Patrick Vu, Raymond M. Duch, and Areeq Chowdhury.
