
The psychology behind the deceptive power of AI-generated images on Facebook

by Eric W. Dolan
January 8, 2026
in Artificial Intelligence, Social Media
[Adobe Stock]

A new study published in Computers in Human Behavior reveals how artificial intelligence is fundamentally reshaping social media interactions by generating images that manipulate user emotions and exploit cognitive shortcuts. The research suggests that specific visual themes, such as nostalgic rural scenes or neglected children, effectively bypass critical thinking and prompt genuine engagement from users.

Social media platforms are increasingly saturated with synthetic content produced by generative artificial intelligence. Much of this content originates from “content farms,” which are websites or pages designed to maximize advertising revenue through high-volume, low-quality posts.

Márk Miskolczi, a researcher at Corvinus University of Budapest, sought to understand the mechanisms behind this growing phenomenon. While previous academic discussions have often focused on the technical aspects of deepfakes or their potential for political disinformation, less attention has been paid to everyday “clickbait” images.

“What motivated this work was a very practical problem: AI-generated images (AIGIs) are now flooding social media, and many of them are explicitly designed to trigger emotional reactions,” explained Miskolczi, an assistant professor in the Institute of Sustainable Development.

“While the public debate often focuses on technical detection (‘can we spot the glitches?’), there has been far less attention to the psychological side: why do people engage so readily with images and stories that are not real? Another gap is that much of the existing research relies on laboratory stimuli. I wanted to study this phenomenon where it actually occurs, using real user reactions.”

“I was also interested in how the platform environment rewards user engagement, which creates incentives for what I describe as ‘content farms’ that mass-produce sentimental or shocking posts,” Miskolczi said. “In that sense, the study sits at the intersection of cognitive biases, online social dynamics, and an emerging attention economy powered by generative AI.”

“Ultimately, the question is not only whether an image is technically convincing, but how it becomes socially ‘validated’ as real through reactions and repeated exposure.”

The study employed a qualitative approach known as Grounded Theory to analyze the data. Miskolczi began by observing public Facebook feeds to identify pages that frequently posted suspicious imagery. This observation phase led to the selection of 12 specific Facebook pages that exhibited behavior typical of content farms. These pages covered a diverse range of topics, including rural nostalgia, elderly care, and spirituality.


To ensure the images analyzed were indeed artificially generated, he utilized a dual-verification process. First, the researcher applied a custom “Eight-step Manual Analysis.” This involved scrutinizing images for visual errors common in AI generation. These signals included incorrect numbers of fingers, unnatural skin textures that appeared too smooth, and objects that defied the laws of physics or gravity.

Following this manual check, images flagged as suspicious were tested using an online AI detection tool. Only images with a probability score of 60 percent or higher were included in the final sample. This rigorous process resulted in a set of 146 confirmed AI-generated images for analysis. The researcher then collected user reactions to these posts to understand how people were engaging with them.

The initial dataset consisted of 11,547 comments. Miskolczi recognized that automated accounts, or bots, often comment on posts to artificially inflate engagement.

To address this, the researcher applied a “Ten-step Manual Analysis” to identify and remove automated responses. Indicators of bot activity included repetitive phrasing, unnatural posting speeds, and generic profiles. This filtering process removed over 2,000 comments, leaving 9,082 genuine user interactions for the final analysis.
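The two filtering stages described above amount to a simple thresholding pipeline: keep images the detector scores at or above 60 percent, and drop comments flagged as bot-like. A minimal sketch of that logic, using hypothetical field names (`ai_probability`, `bot_flag`) since the study's actual data structures are not described in detail:

```python
# Sketch of the study's two-stage filtering, with illustrative field names.

def filter_images(images, threshold=0.60):
    """Keep only images the AI-detection tool scored at or above the threshold."""
    return [img for img in images if img["ai_probability"] >= threshold]

def filter_comments(comments):
    """Drop comments flagged as bot-like by the manual ten-step check."""
    return [c for c in comments if not c["bot_flag"]]

# Toy data illustrating the cutoffs, not the study's real sample.
images = [{"id": 1, "ai_probability": 0.92},
          {"id": 2, "ai_probability": 0.41}]
comments = [{"text": "Happy Birthday!", "bot_flag": True},
            {"text": "God bless you both", "bot_flag": False}]

print(len(filter_images(images)))     # 1 image passes the 60% cutoff
print(len(filter_comments(comments))) # 1 comment survives bot removal
```

Applied at the study's scale, the same logic would reduce 11,547 collected comments to the 9,082 judged genuine.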

The analysis revealed distinct categories of imagery designed to provoke specific reactions. One dominant theme was “Emotion and Nostalgia.” These images often depicted elderly couples celebrating long anniversaries, often with captions claiming they had been together for decades.

Another frequent category was “Arousing Empathy,” which featured subjects in difficult circumstances, such as poverty or loneliness, asking users if they would share a coffee with them. Users responded to these images with high levels of sincerity. The data indicates that users frequently offered prayers, blessings, or words of encouragement to non-existent entities.

“One surprising element was how often highly engaging posts did not require especially sophisticated technology,” Miskolczi told PsyPost. “In many cases, a relatively low-quality or imperfect AI image still generated strong reactions if the story was emotionally compelling.”

Miskolczi found that these reactions were heavily influenced by specific cognitive biases. These are mental shortcuts that the human brain uses to make decisions quickly, often at the expense of accuracy.

Confirmation bias led users to accept images that aligned with their existing worldview. For example, images depicting an idealized version of rural life reinforced the belief that the past was simpler and better.

Because the image supports a deeply held value, the user is less likely to look for evidence that it is fake. This creates a loop where the emotional resonance of the content overrides the need for verification.

The concept of “anchoring” also played a significant role in the deception. Users often focused on the immediate emotional hook provided by the image and its caption.

If a caption described a sad child who was ignored on their birthday, the user’s emotional reaction to that story became the “anchor.” This initial feeling distracted them from noticing visual glitches, such as a distorted hand or a floating object, that revealed the deception.

The “familiarity effect” further reduced skepticism among users. By presenting recognizable and comforting tropes, the images created a false sense of security.

Miskolczi notes that repetitive exposure to familiar themes fosters trust. When users encounter content that feels safe and traditional, they lower their cognitive defenses. This makes them more susceptible to manipulation by content farms seeking to monetize their attention.

The study also highlighted the role of “groupthink” in validating these images. When a post already had thousands of likes and comments, new users were more likely to trust its authenticity.

This effect was amplified by the presence of bot comments that validated the content. Real users seeing a stream of “Happy Birthday” messages felt socially compelled to join in, creating a cascade of uncritical engagement.

“I was struck by how comment sections can function as credibility engines: once supportive responses accumulate, they can ‘lock in’ the interpretation that the content must be real,” Miskolczi said.

“Another striking pattern was that many users commented not only to react to the image, but to connect, to feel less alone, to receive a reply, or to participate in a shared emotional moment. That suggests the persuasive power is not only visual; it is social and relational.”

Deceptive strategies varied in their effectiveness. The researcher found that images evoking nostalgia or depicting old age were particularly effective at suppressing critical thinking.

“Outrage bait,” such as a child supposedly ignored on their birthday, also generated high engagement by weaponizing user empathy. In contrast, themes involving controversial topics like inter-ethnic conflict tended to receive more skepticism and debate.

The findings also point to a phenomenon known as the “dead internet theory.” This theory suggests that a significant portion of internet activity consists of bots interacting with other bots. Miskolczi observed that many real users were unknowingly directing their empathy toward automated accounts. This creates an illusion of community connection while paradoxically reinforcing social isolation.

“Social media has long rewarded emotionally charged content, and people have always relied on cognitive shortcuts when scrolling quickly,” Miskolczi explained. “What generative AI changes is the scale, speed, and cost of producing highly ‘engagement-optimized’ images and stories, so the same biases and emotional triggers can be exploited far more efficiently and at much higher volume.”

“In other words, AI-generated images don’t invent manipulation, but they amplify it and make it easier to industrialize. Over time, this can contribute to broader trust erosion: users may start doubting not only suspicious posts, but authentic photos and genuine human stories as well. It can also fuel AI-skepticism, because repeated exposure to deceptive or low-integrity AI content may generalize into distrust toward AI tools more broadly, even the beneficial ones.”

“Practically, the most useful takeaway is that people need simple, repeatable ‘manual detection’ habits, not just vague advice to ‘be skeptical,’” Miskolczi continued. “In my study, I presented two such tools: ESMA, an eight-step checklist for visually inspecting common AI artifacts (hands, faces, lighting, text, background inconsistencies), and TSMA, a ten-step guide for spotting bot-like or inauthentic commenting patterns that can artificially boost credibility through social proof.”

There are some limitations to consider regarding this research. The study focused exclusively on public Facebook pages. User behavior may differ on platforms with different demographics or interfaces, such as TikTok or Instagram.

Miskolczi suggests that future studies should apply these methods across different social media environments to see if the patterns hold true. The researcher also emphasizes the need for experimental designs to test specific psychological mechanisms more directly. Understanding exactly how much emotional framing contributes to deception could help in designing better interventions.

The study concludes that the unregulated spread of synthetic images poses a risk to platform credibility. If users cannot distinguish between real human experiences and automated fiction, trust in digital content may erode.

Miskolczi advocates for improved digital literacy campaigns. Helping users recognize visual anomalies and understand their own emotional vulnerabilities could reduce the spread of this content.

“A common misinterpretation is that susceptibility to AI-generated content is limited to specific demographic groups, such as older adults or less educated users,” Miskolczi told PsyPost. “My findings do not support this stereotype. The mechanisms involved (emotional anchoring, social proof, and familiarity) are basic features of human cognition and affect people across age, education, and digital skill levels.”

“In fast-scrolling environments, even highly educated or media-literate users can rely on the same shortcuts, especially when content aligns with their values or emotions. Vulnerability is therefore situational rather than demographic: it depends on context, emotional state, and platform dynamics more than on individual intelligence.”

“Framing the issue as a problem of ‘naive users’ risks missing the structural factors that make such content persuasive in the first place,” Miskolczi continued. “It also creates a false sense of immunity that may reduce critical vigilance among users who believe they are not at risk.”

“There is a strong human element that deserves attention: people often respond to these posts from a place of empathy, loneliness, or the desire to connect, and that emotional openness is precisely what makes manipulation effective. If we address this only as a technical detection problem, we miss the social and psychological reasons these posts work.”

“So I would encourage readers to treat the issue as a shared responsibility: individual awareness matters, but so do platform design choices, transparency, and incentives. A healthy response is not panic but building a stronger ‘digital immune system’ through media literacy and better systems of verification.”

The study, “The illusion of reality: How AI-generated images (AIGIs) are fooling social media users,” was authored by Márk Miskolczi.

