A study published in the journal F1000Research in 2023 suggests that specific personality traits, particularly honesty and agreeableness, can predict how confident young adults feel in their ability to spot deepfake videos. The findings provide evidence that our underlying psychological makeup shapes our perceived vulnerability to sophisticated artificial intelligence deception.
Deepfake technology relies on artificial intelligence to create highly realistic, manipulated videos or audio recordings of real people. These programs study thousands of images or voice clips to generate synthetic media depicting people saying or doing things that never actually happened. As these digital forgeries become harder to distinguish from reality, they pose a growing threat to personal privacy and accurate information.
Scientists wanted to understand why some individuals feel more capable of recognizing these digital forgeries than others. A person’s belief in their own capability to succeed in a specific situation is known in psychology as self-efficacy. Past research indicates that self-efficacy is often heavily influenced by fundamental personality traits.
By examining these underlying psychological characteristics, the researchers aimed to map out how different personality profiles influence a person’s confidence in identifying deceptive media. Understanding this relationship helps scientists build better strategies for improving digital literacy and media resilience.
“As a social psychologist, I am fascinated by the intersection of human integrity and technological evolution and am concerned with how information technology is not a neutral tool, but one that can be used to consolidate power by manipulating reality. Deepfakes represent a new frontier of perceptual enclosure, where our very ability to witness the truth is being challenged. I wanted to investigate whether our inherent personality traits offer a form of natural defense or, conversely, a vulnerability to the sophisticated systems that now dictate our digital environment,” explained study author Juneman Abraham, a professor and vice rector of research and technology transfer at BINUS University.
For their study, the scientists focused on the HEXACO model of human personality. This framework categorizes human personality into six broad dimensions. These six dimensions include honesty-humility, emotionality, extraversion, agreeableness, conscientiousness, and openness to experience.
The researchers recruited 200 young adults from Indonesia to participate in an online survey. The sample included 139 women and 61 men, all between the ages of 18 and 25, with an average age of just over 22 years. This specific age group was selected because young adults are highly active online and frequently encounter digital media.
Participants completed a standardized 60-item questionnaire to measure their six HEXACO personality traits. They also answered a custom questionnaire designed to assess their specific self-efficacy in recognizing manipulated media.
This custom measure asked participants to rate their confidence in noticing unnatural elements in photos and videos. For instance, participants rated how capable they felt at spotting abnormal eye movements, mismatched skin tones, or awkward facial expressions that did not match the emotion being spoken.
The statistical analysis revealed that only two of the six personality traits significantly predicted a person’s confidence in detecting deepfakes. Specifically, honesty-humility and agreeableness showed strong but opposing relationships with deepfake detection self-efficacy.
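To make "significantly predicted" concrete, the analysis can be pictured as a multiple regression in which the six HEXACO trait scores jointly predict a self-efficacy score. The sketch below is a hypothetical illustration, not the authors' actual code: the data are simulated, and the coefficient values are invented solely to mirror the reported pattern (a negative slope for honesty-humility, a positive one for agreeableness, near-zero for the rest).

```python
import numpy as np

rng = np.random.default_rng(0)

traits = ["honesty_humility", "emotionality", "extraversion",
          "agreeableness", "conscientiousness", "openness"]
n = 200  # sample size matching the study

# Simulated 1-5 Likert-style trait scores for 200 participants
X = rng.normal(3.5, 0.5, size=(n, 6))

# Invented "true" slopes: only two traits carry a real effect
true_beta = np.array([-0.4, 0.0, 0.0, 0.5, 0.0, 0.0])
y = 3.0 + X @ true_beta + rng.normal(0, 0.3, n)  # simulated self-efficacy

# Ordinary least squares via numpy's least-squares solver
A = np.column_stack([np.ones(n), X])  # prepend an intercept column
beta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)

for name, b in zip(traits, beta_hat[1:]):
    print(f"{name}: {b:+.2f}")
```

Running this recovers a clearly negative estimate for honesty-humility and a clearly positive one for agreeableness, while the other four slopes hover near zero, which is the shape of the result the article describes.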
People who scored high in honesty-humility tended to report lower confidence in their ability to spot deepfakes. This personality trait involves a reluctance to manipulate others and a general lack of interest in breaking rules or accumulating wealth.
The researchers suggest that individuals with high honesty-humility might be less accepting of manipulative technologies in general. As a result, they may feel overwhelmed by the highly deceptive nature of deepfakes and doubt their own capacity to identify them.
In contrast, individuals who scored high in agreeableness reported higher confidence in their ability to detect artificial intelligence manipulations. Agreeableness reflects a person’s tendency to be cooperative, trusting, and willing to reach a compromise with others.
The scientists propose that agreeable people might have more faith in collective intelligence and shared forensic tools. This cooperative mindset tends to boost their confidence in using community resources or the wisdom of the crowd to navigate digital spaces safely.
“The most important takeaway is that individual confidence is an unreliable shield against systemic deception,” Abraham told PsyPost. “Our study found that Agreeableness correlates with higher self-efficacy, which suggests that a person’s willingness to cooperate and trust (traits essential for social cohesion) are being directly tested by AI.”
“However, the negative correlation with Honesty-Humility warns us that those who are most grounded and cautious may actually feel the most vulnerable. For the average person, especially in non-Western contexts where communal trust is a vital social currency, we must realize that digital literacy is not just a technical skill, but a form of social resilience.”
The other four personality traits did not significantly predict self-efficacy. Emotionality, extraversion, conscientiousness, and openness to experience showed no significant association with how confident the young adults felt.
“It was telling that traits traditionally associated with ‘individual success’ in a market-driven society, such as Conscientiousness and Openness, did not significantly predict a person’s confidence in recognizing deepfakes,” Abraham said. “This suggests that the ‘virtues’ of the individual are insufficient when facing the scale of algorithmic deception. It highlights that the problem is not a lack of individual ‘effort’ or ‘intelligence,’ but rather a systemic asymmetry between the creators of these technologies and the people who consume them.”
The statistical analysis also showed no significant difference in self-efficacy between men and women. Both genders reported similar levels of overall confidence in their ability to recognize manipulated digital media.
While these findings offer an insightful look into digital psychology, there are a few limitations to keep in mind. The most significant limitation is that the study measured subjective confidence, not actual accuracy in spotting deepfakes.
People often overestimate their own skills, a psychological phenomenon known as the Dunning-Kruger effect. It is entirely possible that individuals who feel highly confident in their detection abilities might actually perform poorly when tested with real deepfake videos.
“There is a profound danger in a false sense of security provided by technology or personality,” Abraham said. “Furthermore, our research was conducted among young adults in Indonesia. This is important because non-Western societies often have different psychological responses to authority and collective information compared to the Western populations that most AI research focuses on. By democratizing this research, we can help societies in the Global South build their own digital defense systems that are culturally relevant and resistant to external manipulation.”
Future research should involve testing participants with actual deepfake media to compare their perceived confidence against their real-world accuracy. The scientists also recommend using randomized sampling methods to confirm whether these personality traits directly cause changes in digital media awareness.
“My long-term goal is to solidify the framework of Digital Psychoethics as a necessary response to the challenges of our era,” Abraham explained. “If you look at the trajectory of my publications on Google Scholar, there is a very consistent thread, i.e. an effort to understand human integrity within the context of structural pressures. My academic journey, ranging from in-depth studies on the Psychology of Corruption and Academic Integrity to the development of Psychoinformatics, is a logical evolution to address how human honesty is tested when AI begins to hijack the narrative of reality.”
“I view deepfakes not merely as a hiccup, but as a new form of reality tunnel that threatens to erode human agency, particularly in non-Western contexts and the Global South. Consequently, my commitment to Open Science, which I have consistently advocated for across various forums and writings, serves as a form of resistance against the commodification of truth. My aim is to ensure that psychological knowledge regarding AI mitigation does not become a private monopoly or a tool for the tech elite, but rather remains a public good that empowers ordinary people to build collective resilience against the systemic manipulation of reality.”
“We must stop viewing AI deception as a mere technical glitch and start seeing it as a psychological challenge to human sovereignty in an increasingly automated world,” Abraham added. “At a time when our shared reality is being partitioned and sold to the highest bidder, maintaining social trust requires us to understand the human vulnerabilities that these systems are designed to exploit.”
The study, “Prediction of self-efficacy in recognizing deepfakes based on personality traits,” was authored by Juneman Abraham, Heru Alamsyah Putra, Tommy Prayoga, Harco Leslie Hendric Spits Warnars, Rudi Hartono Manurung, and Togiaratua Nainggolan.