Deceptive AI interactions can feel deeper and more genuine than actual human conversations

by Eric W. Dolan
February 5, 2026
in Artificial Intelligence

A new study published in Communications Psychology suggests that artificial intelligence systems can be more effective than humans at establishing emotional closeness during deep conversations, provided the human participant believes the AI is a real person. The findings indicate that while individuals can form social bonds with AI, knowing the partner is a machine reduces the feeling of connection.

The rapid development of large language models has fundamentally altered the landscape of human-computer interaction. Previous observations indicated that these programs can generate content that appears empathetic and human-like. Despite these advances, it remained unclear whether humans could form relationships with AI that are as strong as those formed with other people, particularly during the initial stages of getting to know a stranger.

Scientists aimed to fill this gap by investigating how relationship building differs between human partners and AI partners. They sought to determine if AI could handle “deep talk,” which involves sharing personal feelings and memories, as effectively as it handles superficial “small talk.” Additionally, the research team wanted to understand how a person’s pre-existing attitude toward technology affects this connection. Many people view AI with skepticism or perceive it as a threat to uniquely human qualities like emotion.

To investigate these dynamics, the research team recruited 492 university students between the ages of 18 and 35. The experiments took place online to mimic typical digital communication. To simulate a realistic environment for relationship building, the researchers used a method known as the “Fast Friends Procedure.” This standardized protocol involves two partners asking and answering a series of questions that become increasingly personal over time.

In the first study, 322 participants engaged in a text-based chat. They were all informed that they would be interacting with another human participant. In reality, the researchers assigned half of the participants to chat with a real human. The other half interacted with a fictional character generated by a Google AI model known as PaLM 2. The interactions were further divided into two categories. Some pairs engaged in small talk, discussing casual topics. Others engaged in deep talk, addressing emotionally charged subjects.

The results from this first experiment showed a distinct difference based on the type of conversation. When the interaction involved small talk, participants reported similar levels of closeness regardless of whether their partner was human or AI. However, in the deep talk condition, the AI partner outperformed the human partner. Participants who unknowingly chatted with the AI reported significantly higher feelings of interpersonal closeness than those who chatted with real humans.

To understand why this occurred, the researchers analyzed the linguistic patterns of the chats. They found that the AI produced responses with higher levels of “self-disclosure.” The AI spoke more about emotions, self-related topics, and social processes. This behavior appeared to encourage the human participants to reciprocate. When the AI shared more “personal” details, the humans did the same. This mutual exchange of personal information led to a stronger perceived bond.
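
For readers curious about what such a linguistic analysis involves in practice, the sketch below illustrates the general dictionary-based word-counting approach (in the spirit of tools like LIWC) for scoring messages on categories such as emotion, self-reference, and social processes. The category names and word lists are illustrative placeholders, not the study’s actual lexicon or analysis pipeline.

```python
# Toy sketch of dictionary-based linguistic category counting, the
# general family of methods used to quantify self-disclosure in text.
# The word lists below are hypothetical examples, not the study's lexicon.
import re

CATEGORIES = {
    "emotion": {"happy", "sad", "afraid", "love", "lonely", "excited"},
    "self": {"i", "me", "my", "mine", "myself"},
    "social": {"friend", "family", "talk", "share", "together", "we"},
}

def category_rates(message: str) -> dict[str, float]:
    """Return the share of words in each category for one message."""
    words = re.findall(r"[a-z']+", message.lower())
    if not words:
        return {name: 0.0 for name in CATEGORIES}
    return {
        name: sum(w in vocab for w in words) / len(words)
        for name, vocab in CATEGORIES.items()
    }

print(category_rates("I was afraid to tell my family how lonely I felt."))
# e.g. {'emotion': 0.18, 'self': 0.27, 'social': 0.09}
```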

The second study sought to determine how the label assigned to the partner influenced these feelings. This phase focused exclusively on deep conversations. The researchers analyzed data from 334 participants, combining new recruits with relevant data from the first experiment. In this setup, the researchers manipulated the information given to the participants. Some were told they were chatting with a human, while others were told they were interacting with an AI.

The researchers found that the label played a significant role in relationship building. Regardless of whether the partner was actually a human or a machine, participants reported feeling less closeness when they believed they were interacting with an AI. This suggests an anti-AI bias that hinders social connection. The researchers noted that this effect was likely due to lower motivation. When people thought they were talking to a machine, they wrote shorter responses and engaged less with the conversation.

Despite this bias, the study showed that relationship building did not disappear entirely. Participants still reported an increase in closeness after chatting with a partner labeled as AI, just to a lesser degree than with a partner labeled as human. This suggests that people can develop social bonds with artificial agents even when they are fully aware of the agent’s non-human nature.

The researchers also explored individual differences in these interactions. They looked at a personality trait called “universalism,” which involves a concern for the welfare of people and nature. The analysis indicated that individuals who scored high on universalism felt closer to partners labeled as human but did not show the same increased closeness toward partners labeled as AI. This finding suggests that personal values may influence how receptive an individual is to forming bonds with technology.

The study has several limitations to consider. It relied on text-based communication, which differs significantly from face-to-face or voice-based interaction. The absence of visual and auditory cues might make it easier for an AI to pass as human. Additionally, the sample consisted of university students from a Western cultural context, so the findings may not apply to other age groups or cultures.

The AI responses were generated using a specific model available in early 2024. As technology evolves rapidly, newer models might yield different results. It is also important to note that the AI was prompted to act as a specific character, meaning the results apply to AI designed to mimic human behavior rather than to a generic chatbot assistant.

Future research could investigate whether these effects persist over longer periods. This study looked only at a single, short-term interaction. Scientists could also explore whether using avatars or voice generation changes the dynamic of the relationship. It would be useful to understand if the “uncanny valley” effect, where near-human replicas cause discomfort, becomes relevant as the technology becomes more realistic.

The study has dual implications for society. On one hand, the ability of AI to foster closeness suggests it could be useful in therapeutic settings or for combating loneliness. It could help alleviate the strain on overburdened social and medical services. On the other hand, the fact that AI was most effective when disguised as a human points to significant ethical risks. Malicious actors could use such systems to create deceptive emotional connections for scams or manipulation.

The study, “AI outperforms humans in establishing interpersonal closeness in emotionally engaging interactions, but only when labelled as human,” was authored by Tobias Kleinert, Marie Waldschütz, Julian Blau, Markus Heinrichs, and Bastian Schiller.
