In recent days, the issue of deepfakes has taken center stage in the public consciousness, underscored by the troubling case involving globally renowned pop star Taylor Swift. Artificial Intelligence (AI) has been utilized to create highly convincing, yet entirely false, explicit imagery of Swift, sparking widespread concern and debate.
Deepfakes, a combination of “deep learning” and “fake,” are sophisticated synthetic media where a person in an existing image or video is replaced with someone else’s likeness using advanced AI technologies. While these technologies have impressive applications, they also raise significant ethical and privacy concerns, particularly when used to create explicit content without the consent of the individuals depicted.
The Taylor Swift Incident
The spread of pornographic deepfake images of Taylor Swift across social media platforms, including an image seen by over 47 million users on X (formerly known as Twitter), has reignited discussions about the legality and ethics of such content. Despite efforts by platforms to remove these images and penalize those responsible, the impact remains significant and far-reaching.
This incident has led to bipartisan calls in the United States for stricter federal laws against deepfake pornography, emphasizing the profound emotional, financial, and reputational damage it can inflict, particularly on women.
Democratic Congressman Joseph Morelle’s proposed Preventing Deepfakes of Intimate Images Act aims to criminalize the sharing of non-consensual deepfake pornography. Similarly, Republican Congressman Tom Kean Jr. has highlighted the need for safeguards against the rapid advancement of AI technology in this context, advocating for an AI Labeling Act that mandates clear labeling of all AI-generated content.
The targeting of women for the creation of non-consensual pornographic deepfakes is not new but has become more widespread with advancements in AI. A 2019 study by DeepTrace Labs found that 96% of deepfake video content was non-consensual pornographic material.
The Psychology of Deepfakes
The psychology behind the creation and sharing of deepfakes is a complex matter. Research in this area is in its early stages, exploring the characteristics of individuals who engage in such behaviors and our ability to detect manipulated content.
Who Creates and Shares Deepfakes?
Two recent studies have shed light on the psychological and cognitive profiles of individuals who create and share deepfakes, revealing insights into their motivations and susceptibility.
The Role of Psychopathy
A study published in the journal Computers in Human Behavior shed light on the psychological characteristics associated with the creation and sharing of deepfake pornography. The findings suggested a concerning connection between psychopathic traits and the likelihood of engaging in such activities.
Psychopathy, characterized by a lack of empathy and remorse, impulsivity, and antisocial behaviors, emerged as a significant predictor in this context. Individuals scoring higher on psychopathy scales were found to be more inclined to create deepfakes. This tendency aligns with the general disregard for others’ well-being and the manipulative, risk-taking behaviors often associated with psychopathy.
The study also revealed that those with higher psychopathy scores were more prone to victim blaming and perceiving the creation of deepfakes as less harmful. This diminished sense of harm and accountability points to a dangerous rationalization process that could fuel the proliferation of such content.
“Personality traits associated with the support of deviant online behavior and sexual offending more broadly, for example psychopathy, predicted more lenient judgements of offending as well as a greater proclivity to create and share images – suggesting that through further research, we might be able to better predict people who are more likely to engage in such behavior and intervene accordingly,” the study’s lead author, Dean Fido, explained to PsyPost in 2022.
Gender and Celebrity Status in Victim Perception
The research further highlighted how the perception of deepfake incidents varies based on the victim’s gender and celebrity status. Incidents involving female victims were seen as more harmful and criminal than those involving male victims. Additionally, there was a notable difference in how men and women perceived these incidents, with men being less likely to view deepfakes of celebrities as harmful compared to those involving non-celebrities.
Cognitive Abilities and Sharing Deepfakes
Expanding on the psychological profile, another study in Heliyon examined the influence of cognitive abilities on the susceptibility to and sharing of deepfakes. The findings indicated that individuals with higher cognitive abilities, characterized by better information processing and decision-making skills, are less likely to be deceived by or share deepfakes.
Individuals with higher cognitive abilities tend to engage in more critical analysis of information, making them less susceptible to believing in the authenticity of deepfakes. This heightened skepticism also translates into a lower likelihood of sharing deepfakes, the study showed, pointing to the protective role of cognitive skills against manipulation through such advanced AI technologies.
Deepfake Detection Ability
Three recent studies have made significant strides in understanding how humans interact with and perceive deepfake technology.
The Subconscious Recognition of AI Faces
A study published in Vision Research found that while participants struggled to verbally differentiate real faces from hyper-realistic AI-generated ones, their brain activity could correctly identify AI-generated faces 54% of the time, significantly surpassing their verbal identification accuracy of 37%. This suggests a deeper, subconscious level at which our brains process and recognize the authenticity of images.
“Using behavioural and neuroimaging methods we found that it was possible to reliably detect AI-generated fake images using EEG activity given only a brief glance, even though observers could not consciously report seeing differences,” the researchers concluded. “Given that observers are already struggling with differentiating between fake and real faces, it is of immediate and practical concern to further investigate the important ways in which the brain can tell the two apart.”
Hyperrealism in AI Faces
The second study, detailed in Psychological Science and spearheaded by Amy Dawel of The Australian National University, explored the phenomenon of “hyperrealism” in AI-generated faces. This research delved into why AI faces often appear more human-like than actual human faces.
Using psychological theories like face-space theory, the study examined how AI-generated faces might embody average human features more prominently than real faces. Intriguingly, the study discovered a bias towards hyperrealism in AI-generated White faces, which were consistently perceived as more human than actual White human faces.
The Paradox of Overconfidence
This hyperrealism effect was further underscored by an overconfidence in participants who were less accurate in detecting AI faces, indicating a complex interaction between perception, bias, and confidence in identifying AI-generated content.
“We expected people would realize they weren’t very good at detecting AI, given how realistic the faces have become. We were very surprised to find people were overconfident,” Dawel told PsyPost last year. “People aren’t very good at spotting AI imposters — and if you think you are, chances are you’re making more errors than most. Our study showed that the people who were most confident made the most errors in detecting AI-generated faces.”
A Connection with Conspiracy Mentality and Social Media Use
The third study, from Telematics and Informatics, found that individuals who scored higher on a conspiracy mentality scale and those who spent more time on social media were better at identifying deepfake videos. This suggests that a more questioning mindset and regular exposure to varied media content could enhance one’s ability to discern deepfakes.
“Time spent on social media and belief in conspiracy theories are positively correlated with deepfake detection performance,” the study’s lead author, Ewout Nas, told PsyPost. “In general terms, deepfake videos are becoming more realistic and harder to recognize. One can no longer automatically believe everything you see on video footage.”