Neuroscientists have discovered that the human brain can detect AI-generated deepfake images better than chance, even when individuals cannot verbally identify which images are real and which are fake. The findings, published in Vision Research, indicate that the brain can distinguish deepfakes from authentic images even when conscious awareness of the difference is limited.
“Throughout history, humans have been regarded as the benchmark for face detection. We have consistently outperformed computers in recognizing and classifying faces (although this is changing),” said study author Mic Moshel, a PhD candidate in clinical neuropsychology at Macquarie University.
“However, the emergence of AI has presented a significant challenge in reliably determining whether a face is artificially generated. Intrigued by this development, we sought to investigate how humans respond to hyper-realistic AI-generated faces, specifically exploring the ability to differentiate between real and fake.”
To investigate how the human brain processes and interprets artificially generated images, the researchers conducted two experiments: one involving behavioral testing and the other using neuroimaging. Two hundred participants were recruited from Amazon Mechanical Turk for the behavioral testing, and 22 participants were recruited from the University of Sydney for the neuroimaging component.
The researchers used artificial neural networks called Generative Adversarial Networks (GANs) to generate the stimuli. These stimuli consisted of realistic and unrealistic images of faces, cars, and bedrooms. Real images were also obtained from training sets used for GANs. The images were standardized and presented to the participants in both upright and inverted orientations.
In the behavioral testing, participants viewed the images online and had to quickly determine whether each image was real or fake. The images were presented for a short duration, and participants made their judgments based on their immediate visual impressions. This testing aimed to assess how well untrained observers could distinguish between real and fake images.
In the neuroimaging component, participants underwent EEG recordings while viewing the images. The EEG data was analyzed to investigate the brain’s response to real and fake images. The researchers focused on identifying any differences in brain activity patterns between the two types of images.
Participants were able to reliably distinguish between real and unrealistic AI-generated faces, but they struggled to differentiate real faces from realistic AI-generated faces. The orientation of the images (upright or inverted) did not significantly affect their ability to discriminate.
“Our findings revealed that individuals can potentially recognize AI-generated faces given only a brief glance. Nevertheless, distinguishing genuine faces from AI-generated ones proves to be more challenging. Surprisingly, people frequently exhibit the tendency to mistakenly perceive AI-generated faces as more authentic than real faces.”
The researchers found that although participants had difficulty discriminating between real and realistic faces, distinct neural representations of the two were observed in the brain. Decoding analyses of participants’ brain activity identified AI-generated faces 54 percent of the time, while verbal identification accuracy was only 37 percent. The findings suggest that the distinction between real and realistic faces can be successfully decoded from neural activity, even though this distinction is not reflected in behavioral performance.
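The decoding result can be illustrated with a toy analysis. The sketch below is not the authors’ pipeline: it uses entirely synthetic EEG-like data with an exaggerated, hypothetical class signal on one channel, and a minimal nearest-class-mean classifier, simply to show how above-chance decoding of real-vs-fake labels can emerge from noisy trial data even when single trials look indistinguishable.

```python
# Hedged sketch (not the study's actual analysis): decoding "real vs. fake"
# labels from simulated EEG-like trials with a nearest-class-mean classifier.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_channels = 400, 64

# 0 = real face trial, 1 = GAN-generated face trial (labels are simulated)
labels = rng.integers(0, 2, n_trials)

# Noise trials, plus a hypothetical class-dependent signal on one channel
eeg = rng.normal(size=(n_trials, n_channels))
eeg[:, 0] += 1.5 * labels

# Split trials into training and test halves
train, test = slice(0, 300), slice(300, 400)
mean_real = eeg[train][labels[train] == 0].mean(axis=0)
mean_fake = eeg[train][labels[train] == 1].mean(axis=0)

# Classify each held-out trial by whichever class mean it is closer to
d_real = np.linalg.norm(eeg[test] - mean_real, axis=1)
d_fake = np.linalg.norm(eeg[test] - mean_fake, axis=1)
pred = (d_fake < d_real).astype(int)

accuracy = (pred == labels[test]).mean()  # should exceed the 0.5 chance level
```

In practice, studies of this kind typically decode per time point across many EEG channels with cross-validation, but the core idea is the same: a classifier can pick up distributed information that individual observers do not consciously report.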
“Through the examination of brain activity, we identified a discernible signal responsible for differentiating between real and AI-generated faces. However, the precise reason why this signal is not utilized to guide behavioural decision-making remains uncertain.”
The findings of this study could have implications for various areas such as cybersecurity, counterfeiting, fake news, and border security, where the ability to distinguish between real and fake images is crucial.
“Using behavioural and neuroimaging methods we found that it was possible to reliably detect AI-generated fake images using EEG activity given only a brief glance, even though observers could not consciously report seeing differences,” the researchers concluded. “Given that observers are already struggling with differentiating between fake and real faces, it is of immediate and practical concern to further investigate the important ways in which the brain can tell the two apart.”
“It is becoming increasingly possible to rapidly and effortlessly generate realistic fake images, videos, writing, and multimedia that are practically indiscernible from real. This capacity is only going to become more widespread and has profound implications for cybersecurity, fake news, detection bypass, and social media.”
The study, “Are you for real? Decoding realistic AI-generated faces from neural activity”, was authored by Michoel L. Moshel, Amanda K. Robinson, Thomas A. Carlson, and Tijl Grootswagers.