A new study published in Communications Psychology suggests that artificial intelligence systems can be more effective than humans at establishing emotional closeness during deep conversations, provided the human participant believes the AI is a real person. The findings indicate that while individuals can form social bonds with AI, knowing the partner is a machine reduces the feeling of connection.
The rapid development of large language models has fundamentally altered the landscape of human-computer interaction. Earlier observations have suggested that these programs can generate content that appears empathetic and human-like. Despite these advances, it remained unclear whether humans could form relationships with AI as strong as those they form with other people, particularly during the initial stages of getting to know a stranger.
The researchers aimed to fill this gap by investigating how relationship building differs between human and AI partners. They sought to determine whether AI could handle “deep talk,” which involves sharing personal feelings and memories, as effectively as it handles superficial “small talk.” The team also wanted to understand how a person’s pre-existing attitudes toward technology affect this connection, since many people view AI with skepticism or perceive it as a threat to uniquely human qualities such as emotion.
To investigate these dynamics, the research team recruited 492 university students between the ages of 18 and 35. The experiments took place online to mimic typical digital communication. To create a realistic setting for relationship building, the researchers used a method known as the “Fast Friends Procedure,” a standardized protocol in which two partners ask and answer a series of questions that become increasingly personal over time.
In the first study, 322 participants engaged in a text-based chat. All were informed that they would be interacting with another human participant. In reality, the researchers assigned half of the participants to chat with a real human, while the other half interacted with a fictional character generated by Google’s PaLM 2 language model. The interactions were further divided into two categories: some pairs engaged in small talk, discussing casual topics, while others engaged in deep talk, addressing emotionally charged subjects.
The results from this first experiment showed a distinct difference based on the type of conversation. When the interaction involved small talk, participants reported similar levels of closeness regardless of whether their partner was human or AI. However, in the deep talk condition, the AI partner outperformed the human partner. Participants who unknowingly chatted with the AI reported significantly higher feelings of interpersonal closeness than those who chatted with real humans.
To understand why this occurred, the researchers analyzed the linguistic patterns of the chats. They found that the AI produced responses with higher levels of “self-disclosure,” speaking more about emotions, self-related topics, and social processes. This behavior appeared to encourage the human participants to reciprocate: when the AI shared more “personal” details, the humans did the same, and this mutual exchange of personal information led to a stronger perceived bond.
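To give a rough sense of how such linguistic patterns can be quantified, the following Python sketch counts the share of emotion-, self-, and social-related words in a message using small, hypothetical word lists, in the spirit of dictionary-based text analysis. This is purely illustrative and is not the authors’ actual analysis pipeline; the category lists and function names are assumptions made for the example.

```python
# Illustrative sketch (not the study's pipeline): a minimal dictionary-based
# word count showing how the proportion of emotion-, self-, and social-related
# words in a chat message could be measured. The word lists are hypothetical.
import re
from collections import Counter

CATEGORIES = {
    "emotion": {"happy", "sad", "afraid", "love", "worried", "proud"},
    "self": {"i", "me", "my", "mine", "myself"},
    "social": {"friend", "family", "together", "we", "us", "talk", "talking"},
}

def category_rates(message: str) -> dict:
    """Return, for each category, the fraction of words in the message
    that appear in that category's word list."""
    words = re.findall(r"[a-z']+", message.lower())
    if not words:
        return {name: 0.0 for name in CATEGORIES}
    counts = Counter(words)
    return {
        name: sum(counts[w] for w in wordlist) / len(words)
        for name, wordlist in CATEGORIES.items()
    }

if __name__ == "__main__":
    example = "I was worried about my family, but talking with a friend made me happy."
    print(category_rates(example))
```

Higher rates in the emotion and self categories would, under this simplified scheme, correspond to the kind of self-disclosing language the study attributes to the AI partner.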
The second study sought to determine how the label assigned to the partner influenced these feelings. This phase focused exclusively on deep conversations. The researchers analyzed data from 334 participants, combining new recruits with relevant data from the first experiment. In this setup, the researchers manipulated the information given to the participants. Some were told they were chatting with a human, while others were told they were interacting with an AI.
The researchers found that the label played a significant role in relationship building. Regardless of whether the partner was actually a human or a machine, participants reported feeling less closeness when they believed they were interacting with an AI. This suggests an anti-AI bias that hinders social connection. The researchers noted that this effect was likely due to lower motivation. When people thought they were talking to a machine, they wrote shorter responses and engaged less with the conversation.
Despite this bias, the study showed that relationship building did not disappear entirely. Participants still reported an increase in closeness after chatting with a partner labeled as AI, just to a lesser degree than with a partner labeled as human. This suggests that people can develop social bonds with artificial agents even when they are fully aware of the agent’s non-human nature.
The researchers also explored individual differences in these interactions. They looked at a personality trait called “universalism,” which involves a concern for the welfare of people and nature. The analysis indicated that individuals who scored high on universalism felt closer to partners labeled as human but did not show the same increased closeness toward partners labeled as AI. This finding suggests that personal values may influence how receptive an individual is to forming bonds with technology.
There are several limitations and caveats to consider regarding this work. The study relied on text-based communication, which differs significantly from face-to-face or voice-based interactions, and the absence of visual and auditory cues might make it easier for an AI to pass as human. Additionally, the sample consisted of university students from a Western cultural context, so the findings may not apply to other age groups or cultures.
The AI responses were generated using a specific model available in early 2024. As technology evolves rapidly, newer models might yield different results. It is also important to note that the AI was prompted to act as a specific character. This means the results apply to AI that is designed to mimic human behavior, rather than a generic chatbot assistant.
Future research could investigate whether these effects persist over longer periods. This study looked only at a single, short-term interaction. Scientists could also explore whether using avatars or voice generation changes the dynamic of the relationship. It would be useful to understand if the “uncanny valley” effect, where near-human replicas cause discomfort, becomes relevant as the technology becomes more realistic.
The study has dual implications for society. On one hand, the ability of AI to foster closeness suggests it could be useful in therapeutic settings or for combating loneliness. It could help alleviate the strain on overburdened social and medical services. On the other hand, the fact that AI was most effective when disguised as a human points to significant ethical risks. Malicious actors could use such systems to create deceptive emotional connections for scams or manipulation.
The study, “AI outperforms humans in establishing interpersonal closeness in emotionally engaging interactions, but only when labelled as human,” was authored by Tobias Kleinert, Marie Waldschütz, Julian Blau, Markus Heinrichs, and Bastian Schiller.