
The psychology behind the deceptive power of AI-generated images on Facebook

by Eric W. Dolan
January 8, 2026
in Artificial Intelligence, Social Media
[Adobe Stock]


A new study published in Computers in Human Behavior reveals how artificial intelligence is fundamentally reshaping social media interactions by generating images that manipulate user emotions and exploit cognitive shortcuts. The research suggests that specific visual themes, such as nostalgic rural scenes or neglected children, effectively bypass critical thinking and prompt genuine engagement from users.

Social media platforms are increasingly saturated with synthetic content produced by generative artificial intelligence. Much of this content originates from “content farms,” which are websites or pages designed to maximize advertising revenue through high-volume, low-quality posts.

Márk Miskolczi, a researcher at Corvinus University of Budapest, sought to understand the mechanisms behind this growing phenomenon. While previous academic discussions have often focused on the technical aspects of deepfakes or their potential for political disinformation, less attention has been paid to everyday “clickbait” images.

“What motivated this work was a very practical problem: AI-generated images (AIGIs) are now flooding social media, and many of them are explicitly designed to trigger emotional reactions,” explained Miskolczi, an assistant professor in the Institute of Sustainable Development.

“While the public debate often focuses on technical detection (‘can we spot the glitches?’), there has been far less attention to the psychological side: why do people engage so readily with images and stories that are not real? Another gap is that much of the existing research relies on laboratory stimuli. I wanted to study this phenomenon where it actually occurs, using real user reactions.”

“I was also interested in how the platform environment rewards user engagement, which creates incentives for what I describe as ‘content farms’ that mass-produce sentimental or shocking posts,” Miskolczi said. “In that sense, the study sits at the intersection of cognitive biases, online social dynamics, and an emerging attention economy powered by generative AI.”

“Ultimately, the question is not only whether an image is technically convincing, but how it becomes socially ‘validated’ as real through reactions and repeated exposure.”

The study employed a qualitative approach known as Grounded Theory to analyze the data. Miskolczi began by observing public Facebook feeds to identify pages that frequently posted suspicious imagery. This observation phase led to the selection of 12 specific Facebook pages that exhibited behavior typical of content farms. These pages covered a diverse range of topics, including rural nostalgia, elderly care, and spirituality.


To ensure the images analyzed were indeed artificially generated, he utilized a dual-verification process. First, the researcher applied a custom “Eight-step Manual Analysis.” This involved scrutinizing images for visual errors common in AI generation. These signals included incorrect numbers of fingers, unnatural skin textures that appeared too smooth, and objects that defied the laws of physics or gravity.

Following this manual check, images flagged as suspicious were tested using an online AI detection tool. Only images with a probability score of 60 percent or higher were included in the final sample. This rigorous process resulted in a set of 146 confirmed AI-generated images for analysis. The researcher then collected user reactions to these posts to understand how people were engaging with them.
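The thresholding step described here can be illustrated with a short sketch. This is not the study's actual tooling; the detection scores, field names, and the representation of the 60 percent cutoff as a fraction are illustrative assumptions based on the description above.

```python
# Illustrative sketch of the inclusion criterion: keep only images
# whose AI-detection probability meets the 60 percent cutoff.
# Data structure and field names are hypothetical.

def filter_confirmed_aigis(images, threshold=0.60):
    """Keep only images whose AI-detection probability meets the cutoff."""
    return [img for img in images if img["ai_probability"] >= threshold]

flagged = [
    {"id": "post_1", "ai_probability": 0.92},
    {"id": "post_2", "ai_probability": 0.41},
    {"id": "post_3", "ai_probability": 0.60},
]

confirmed = filter_confirmed_aigis(flagged)
print([img["id"] for img in confirmed])  # ['post_1', 'post_3']
```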

The initial dataset consisted of 11,547 comments. Miskolczi recognized that automated accounts, or bots, often comment on posts to artificially inflate engagement.

To address this, the researcher applied a “Ten-step Manual Analysis” to identify and remove automated responses. Indicators of bot activity included repetitive phrasing, unnatural posting speeds, and generic profiles. This filtering process removed over 2,000 comments, leaving 9,082 genuine user interactions for the final analysis.
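The study's ten-step analysis was performed manually, but two of the indicators it mentions, repetitive phrasing and unnatural posting speed, can be approximated in code as a rough analogue. The data structure, thresholds, and field names below are hypothetical, not the author's method.

```python
from collections import Counter

def flag_bot_comments(comments, min_repeats=5, min_gap_seconds=2.0):
    """Flag comments matching two simple automation heuristics:
    (1) identical text repeated many times across the thread,
    (2) the same account posting faster than a human could type."""
    text_counts = Counter(c["text"] for c in comments)
    flagged = set()
    # Heuristic 1: verbatim phrasing repeated across many comments
    for c in comments:
        if text_counts[c["text"]] >= min_repeats:
            flagged.add(c["id"])
    # Heuristic 2: same account posting at an unnatural speed
    last_seen = {}  # user -> timestamp of their previous comment
    for c in sorted(comments, key=lambda c: c["timestamp"]):
        prev = last_seen.get(c["user"])
        if prev is not None and c["timestamp"] - prev < min_gap_seconds:
            flagged.add(c["id"])
        last_seen[c["user"]] = c["timestamp"]
    return flagged

comments = [
    {"id": 1, "user": "u1", "text": "Happy Birthday", "timestamp": 0.0},
    {"id": 2, "user": "u2", "text": "Happy Birthday", "timestamp": 10.0},
    {"id": 3, "user": "u3", "text": "Happy Birthday", "timestamp": 20.0},
    {"id": 4, "user": "u4", "text": "Happy Birthday", "timestamp": 30.0},
    {"id": 5, "user": "u5", "text": "Happy Birthday", "timestamp": 40.0},
    {"id": 6, "user": "u6", "text": "So sweet!", "timestamp": 50.0},
    {"id": 7, "user": "u6", "text": "God bless", "timestamp": 50.5},
]

flagged = flag_bot_comments(comments)
```

In this toy thread, the five identical "Happy Birthday" comments trip the repetition heuristic, and the second of u6's back-to-back comments trips the speed heuristic. Real bot detection, as the study's manual analysis suggests, also weighs profile characteristics that are harder to encode.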

The analysis revealed distinct categories of imagery designed to provoke specific reactions. One dominant theme was “Emotion and Nostalgia.” These images often depicted elderly couples celebrating long anniversaries, often with captions claiming they had been together for decades.

Another frequent category was “Arousing Empathy,” which featured subjects in difficult circumstances, such as poverty or loneliness, asking users if they would share a coffee with them. Users responded to these images with high levels of sincerity. The data indicates that users frequently offered prayers, blessings, or words of encouragement to non-existent entities.

“One surprising element was how often highly engaging posts did not require especially sophisticated technology,” Miskolczi told PsyPost. “In many cases, a relatively low-quality or imperfect AI image still generated strong reactions if the story was emotionally compelling.”

Miskolczi found that these reactions were heavily influenced by specific cognitive biases. These are mental shortcuts that the human brain uses to make decisions quickly, often at the expense of accuracy.

Confirmation bias led users to accept images that align with their existing worldview. For example, images depicting an idealized version of rural life reinforce the belief that the past was simpler and better.

Because the image supports a deeply held value, the user is less likely to look for evidence that it is fake. This creates a loop where the emotional resonance of the content overrides the need for verification.

The concept of “anchoring” also played a significant role in the deception. Users often focused on the immediate emotional hook provided by the image and its caption.

If a caption described a sad child who was ignored on their birthday, the user’s emotional reaction to that story became the “anchor.” This initial feeling distracted them from noticing visual glitches, such as a distorted hand or a floating object, that revealed the deception.

The “familiarity effect” further reduced skepticism among users. By presenting recognizable and comforting tropes, the images created a false sense of security.

Miskolczi notes that repetitive exposure to familiar themes fosters trust. When users encounter content that feels safe and traditional, they lower their cognitive defenses. This makes them more susceptible to manipulation by content farms seeking to monetize their attention.

The study also highlighted the role of “groupthink” in validating these images. When a post already had thousands of likes and comments, new users were more likely to trust its authenticity.

This effect was amplified by the presence of bot comments that validated the content. Real users seeing a stream of “Happy Birthday” messages felt socially compelled to join in, creating a cascade of uncritical engagement.

“I was struck by how comment sections can function as credibility engines: once supportive responses accumulate, they can ‘lock in’ the interpretation that the content must be real,” Miskolczi said.

“Another striking pattern was that many users commented not only to react to the image, but to connect, to feel less alone, to receive a reply, or to participate in a shared emotional moment. That suggests the persuasive power is not only visual; it is social and relational.”

Deceptive strategies varied in their effectiveness. The researcher found that images evoking nostalgia or depicting elderly people were particularly effective at suppressing critical thinking.

“Outrage bait,” such as a child supposedly ignored on their birthday, also generated high engagement by weaponizing user empathy. In contrast, themes involving controversial topics like inter-ethnic conflict tended to receive more skepticism and debate.

The findings also point to a phenomenon known as the “dead internet theory.” This theory suggests that a significant portion of internet activity consists of bots interacting with other bots. Miskolczi observed that many real users were unknowingly directing their empathy toward automated accounts. This creates an illusion of community connection while paradoxically reinforcing social isolation.

“Social media has long rewarded emotionally charged content, and people have always relied on cognitive shortcuts when scrolling quickly,” Miskolczi explained. “What generative AI changes is the scale, speed, and cost of producing highly ‘engagement-optimized’ images and stories, so the same biases and emotional triggers can be exploited far more efficiently and at much higher volume.”

“In other words, AI-generated images don’t invent manipulation, but they amplify it and make it easier to industrialize. Over time, this can contribute to broader trust erosion: users may start doubting not only suspicious posts, but authentic photos and genuine human stories as well. It can also fuel AI-skepticism, because repeated exposure to deceptive or low-integrity AI content may generalize into distrust toward AI tools more broadly, even the beneficial ones.”

“Practically, the most useful takeaway is that people need simple, repeatable ‘manual detection’ habits, not just vague advice to ‘be skeptical,'” Miskolczi continued. “In my study, I presented two such tools: ESMA, an eight-step checklist for visually inspecting common AI artifacts (hands, faces, lighting, text, background inconsistencies), and TSMA, a ten-step guide for spotting bot-like or inauthentic commenting patterns that can artificially boost credibility through social proof.”

There are some limitations to consider regarding this research. The study focused exclusively on public Facebook pages. User behavior may differ on platforms with different demographics or interfaces, such as TikTok or Instagram.

Miskolczi suggests that future studies should apply these methods across different social media environments to see if the patterns hold true. The researcher also emphasizes the need for experimental designs to test specific psychological mechanisms more directly. Understanding exactly how much emotional framing contributes to deception could help in designing better interventions.

The study concludes that the unregulated spread of synthetic images poses a risk to platform credibility. If users cannot distinguish between real human experiences and automated fiction, trust in digital content may erode.

Miskolczi advocates for improved digital literacy campaigns. Helping users recognize visual anomalies and understand their own emotional vulnerabilities could reduce the spread of this content.

“A common misinterpretation is that susceptibility to AI-generated content is limited to specific demographic groups, such as older adults or less educated users,” Miskolczi told PsyPost. “My findings do not support this stereotype. The mechanisms involved (emotional anchoring, social proof, and familiarity) are basic features of human cognition and affect people across age, education, and digital skill levels.”

“In fast-scrolling environments, even highly educated or media-literate users can rely on the same shortcuts, especially when content aligns with their values or emotions. Vulnerability is therefore situational rather than demographic: it depends on context, emotional state, and platform dynamics more than on individual intelligence.”

“Framing the issue as a problem of ‘naive users’ risks missing the structural factors that make such content persuasive in the first place,” Miskolczi continued. “It also creates a false sense of immunity that may reduce critical vigilance among users who believe they are not at risk.”

“There is a strong human element that deserves attention: people often respond to these posts from a place of empathy, loneliness, or the desire to connect, and that emotional openness is precisely what makes manipulation effective. If we address this only as a technical detection problem, we miss the social and psychological reasons these posts work.”

“So I would encourage readers to treat the issue as a shared responsibility: individual awareness matters, but so do platform design choices, transparency, and incentives. A healthy response is not panic but building a stronger ‘digital immune system’ through media literacy and better systems of verification.”

The study, “The illusion of reality: How AI-generated images (AIGIs) are fooling social media users,” was authored by Márk Miskolczi.

