PsyPost

Deceptive AI interactions can feel deeper and more genuine than actual human conversations

by Eric W. Dolan
February 5, 2026
[Adobe Stock]

A new study published in Communications Psychology suggests that artificial intelligence systems can be more effective than humans at establishing emotional closeness during deep conversations, provided the human participant believes the AI is a real person. The findings indicate that while individuals can form social bonds with AI, knowing the partner is a machine reduces the feeling of connection.

The rapid development of large language models has fundamentally altered the landscape of human-computer interaction. Previous observations have indicated that these programs can generate content that appears empathetic and similar to human speech. Despite these advancements, it remained unclear whether humans could form relationships with AI that are as strong as those formed with other people. This is particularly relevant during the initial stages of getting to know a stranger.

Scientists aimed to fill this gap by investigating how relationship building differs between human partners and AI partners. They sought to determine if AI could handle “deep talk,” which involves sharing personal feelings and memories, as effectively as it handles superficial “small talk.” Additionally, the research team wanted to understand how a person’s pre-existing attitude toward technology affects this connection. Many people view AI with skepticism or perceive it as a threat to uniquely human qualities like emotion.

To investigate these dynamics, the research team recruited 492 university students between the ages of 18 and 35. The experiments took place online to mimic typical digital communication. To simulate a realistic environment for relationship building, the researchers used the “Fast Friends Procedure,” a standardized protocol in which two partners ask and answer a series of questions that become increasingly personal over time.

In the first study, 322 participants engaged in a text-based chat. They were all informed that they would be interacting with another human participant. In reality, the researchers assigned half of the participants to chat with a real human. The other half interacted with a fictional character generated by a Google AI model known as PaLM 2. The interactions were further divided into two categories. Some pairs engaged in small talk, discussing casual topics. Others engaged in deep talk, addressing emotionally charged subjects.

The results from this first experiment showed a distinct difference based on the type of conversation. When the interaction involved small talk, participants reported similar levels of closeness regardless of whether their partner was human or AI. However, in the deep talk condition, the AI partner outperformed the human partner. Participants who unknowingly chatted with the AI reported significantly higher feelings of interpersonal closeness than those who chatted with real humans.

To understand why this occurred, the researchers analyzed the linguistic patterns of the chats. They found that the AI produced responses with higher levels of “self-disclosure.” The AI spoke more about emotions, self-related topics, and social processes. This behavior appeared to encourage the human participants to reciprocate. When the AI shared more “personal” details, the humans did the same. This mutual exchange of personal information led to a stronger perceived bond.
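Linguistic analyses of this kind are typically done with dictionary-based word counting (in the style of LIWC), where each response is scored by the share of words falling into predefined categories. The sketch below is purely illustrative: the word lists and category names are hypothetical examples, not the dictionaries the study actually used.

```python
# Illustrative dictionary-based word counting, similar in spirit to
# LIWC-style analyses of self-disclosure. The word lists below are
# hypothetical examples, not the categories from the study.
import re

CATEGORIES = {
    "emotion": {"happy", "sad", "afraid", "love", "worried"},
    "self": {"i", "me", "my", "myself", "mine"},
    "social": {"friend", "family", "we", "together", "talk"},
}

def category_rates(text: str) -> dict:
    """Return the fraction of words in `text` that fall into each category."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1  # avoid division by zero on empty input
    return {
        name: sum(w in vocab for w in words) / total
        for name, vocab in CATEGORIES.items()
    }

rates = category_rates("I was worried, but my friend and I talked it through.")
```

A response with higher rates in the “emotion” and “self” categories would, under this kind of scheme, be scored as containing more self-disclosure.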

The second study sought to determine how the label assigned to the partner influenced these feelings. This phase focused exclusively on deep conversations. The researchers analyzed data from 334 participants, combining new recruits with relevant data from the first experiment. In this setup, the researchers manipulated the information given to the participants. Some were told they were chatting with a human, while others were told they were interacting with an AI.


The researchers found that the label played a significant role in relationship building. Regardless of whether the partner was actually a human or a machine, participants reported feeling less closeness when they believed they were interacting with an AI. This suggests an anti-AI bias that hinders social connection. The researchers noted that this effect was likely due to lower motivation. When people thought they were talking to a machine, they wrote shorter responses and engaged less with the conversation.

Despite this bias, relationship building did not disappear entirely. Participants still reported an increase in closeness after chatting with a partner labeled as AI, though to a lesser degree than with a partner labeled as human. This suggests that people can develop social bonds with artificial agents even when they are fully aware of the agent’s non-human nature.

The researchers also explored individual differences in these interactions. They looked at a personality trait called “universalism,” which involves a concern for the welfare of people and nature. The analysis indicated that individuals who scored high on universalism felt closer to partners labeled as human but did not show the same increased closeness toward partners labeled as AI. This finding suggests that personal values may influence how receptive an individual is to forming bonds with technology.

There are several caveats and limitations to consider. The study relied on text-based communication, which differs significantly from face-to-face or voice-based interactions; the absence of visual and auditory cues might make it easier for an AI to pass as human. Additionally, the sample consisted of university students from a Western cultural context, so the findings may not apply to other age groups or cultures.

The AI responses were generated using a specific model available in early 2024. As technology evolves rapidly, newer models might yield different results. It is also important to note that the AI was prompted to act as a specific character. This means the results apply to AI that is designed to mimic human behavior, rather than a generic chatbot assistant.

Future research could investigate whether these effects persist over longer periods. This study looked only at a single, short-term interaction. Scientists could also explore whether using avatars or voice generation changes the dynamic of the relationship. It would be useful to understand if the “uncanny valley” effect, where near-human replicas cause discomfort, becomes relevant as the technology becomes more realistic.

The study has dual implications for society. On one hand, the ability of AI to foster closeness suggests it could be useful in therapeutic settings or for combating loneliness. It could help alleviate the strain on overburdened social and medical services. On the other hand, the fact that AI was most effective when disguised as a human points to significant ethical risks. Malicious actors could use such systems to create deceptive emotional connections for scams or manipulation.

The study, “AI outperforms humans in establishing interpersonal closeness in emotionally engaging interactions, but only when labelled as human,” was authored by Tobias Kleinert, Marie Waldschütz, Julian Blau, Markus Heinrichs, and Bastian Schiller.
