
Emotionally intelligent AI chatbots improve mental health but destroy real-world social ties

by Karina Petrova
March 19, 2026
in Artificial Intelligence

A new study reveals that interacting with emotionally intelligent artificial intelligence chatbots can boost a person’s mental health while simultaneously isolating them from real human relationships. The research highlights a hidden trade-off in using these digital companions, where the comfort provided by algorithms comes at the cost of real-world social ties. The findings were published in the journal Psychology & Marketing.

Millions of people turn to artificial intelligence chatbots to alleviate loneliness and find emotional support. Unlike older digital assistants that simply set alarms or book flights, modern social chatbots use advanced algorithms to mimic human empathy. They try to replicate emotional intelligence, which is the ability to recognize, understand, and manage emotions.

By mimicking this trait, applications like Replika or Wysa act as digital friends that adapt to user moods. The global market for these advanced digital companions is growing rapidly, attracting millions of users seeking a safe space to express their feelings.

Shaphali Gupta, a researcher at the Indian Institute of Management Kozhikode, led an investigation into how these emotionally intelligent bots affect human users. Along with colleagues Sumit Saxena and Sonia Kataria, Gupta wanted to understand the full spectrum of these digital interactions. Previous research largely focused on the positive psychological benefits of artificial intelligence companions.

The team suspected there might be a negative side to this technological comfort, specifically regarding how users connect with other humans in the physical world. They framed their research around the technology-wellbeing paradox, an idea suggesting that digital tools can act as a double-edged sword for human health.

The researchers focused on two distinct types of wellness to capture this paradox. Psychological wellbeing refers to a person’s internal mental state, including their sense of happiness, life purpose, and emotional resilience. Social wellbeing represents how connected and integrated a person feels within their real-world community of friends, family, and neighbors. By measuring both of these outcomes, the team hoped to uncover the true cost of seeking solace in a conversational machine.

To begin their investigation, the researchers analyzed how actual users talk about their chatbot experiences online. They collected publicly available comments from YouTube, review sites like Trustpilot, and a large Reddit community dedicated to the Replika application. By reading through hundreds of user posts, the team identified several recurring patterns about how these bots behave and make people feel.

The researchers used an observational technique called netnography to analyze the online posts. This involves studying the digital behaviors of online cultures without directly interfering with their conversations. Users frequently described their chatbots as empathetic and highly adaptable to different social moods. They reported that the bots helped regulate their emotions, often lifting their spirits and helping them find personal meaning during difficult times.


A few users mentioned that the digital friend helped them master their environment by giving advice on how to handle real-life stress. The software seemed to provide an ideal social space where humans felt completely free from judgment. However, a darker pattern also emerged from the online forums. Some users admitted they were spending so much time talking to their digital companions that they felt disconnected from their real-life friends.

Several people expressed that they were ignoring their physical relationships because they preferred the easy attention of the bot over the unpredictable complexities of human interaction. The digital companion was fulfilling their social needs so completely that they no longer felt motivated to maintain their offline friendships.

Building on these online observations, Gupta and her team designed a controlled experiment to test these effects directly. They recruited 167 college students belonging to Generation Z, a demographic known for its high usage of digital tools. The participants were asked to imagine they were feeling lonely and needed to chat with a digital friend.

Half of the group read a scenario where the chatbot showed high emotional intelligence, offering deep empathy and using emotive language. The other half read a scenario featuring a bot with low emotional intelligence, providing more generic and less empathetic responses. The researchers then asked the participants to rate their expected levels of psychological and social health after the interaction.

Participants who interacted with the highly emotionally intelligent bot reported an expected boost in their psychological state. At the same time, this same group reported a drop in their expected social connectedness. The team discovered that this dual effect was driven by a psychological mechanism called perceived closeness.

Perceived closeness arises when a person feels a strong emotional bond and a sense of warmth toward another entity. When a bot acts like it truly understands a user, the human forms an intense connection with the software. This intense digital connection improves their immediate internal mood but reduces their desire to seek out human interaction. The digital friendship essentially crowds out the space normally reserved for human-to-human relationships.

Next, the researchers wanted to see how the format of the conversation might alter this psychological trade-off. They conducted a second experiment with 350 different college students. This time, they introduced augmented reality into the testing scenarios. Augmented reality is a technology that overlays digital images onto the physical world, often through a smartphone camera.

Some applications allow users to project a three-dimensional avatar of their digital friend into their bedroom or living room. In this experiment, some participants imagined texting the bot on a standard screen, while others imagined the bot sitting right next to them in their physical room through augmented reality. The researchers wanted to know if the visual immersion of seeing a digital entity in a physical space would change the way users felt about their real-world friends.

The students answered a series of questions to gauge their perceived closeness to the bot and their expected well-being. The results mirrored the first experiment, but the addition of augmented reality amplified both effects. When participants visualized an emotionally intelligent bot in their own physical space, their feelings of closeness to the machine skyrocketed.

This created an even larger boost to their psychological wellness compared to those who just used text. The augmented reality feature made the emotional support feel incredibly vivid and personal. Conversely, the immersive nature of augmented reality caused an even sharper decline in their social wellness.

The visual presence of the digital friend made real-world human connections seem even less necessary to the users. The researchers noted that augmented reality amplifies the emotional intelligence of the bot. This makes the digital illusion so comforting that users withdraw even further from their actual physical communities.

While the research offers a detailed look at human and machine relationships, it does have a few limitations. The experiments relied on hypothetical scenarios and self-reported expectations rather than tracking long-term behavioral changes. Because the scenarios were simulated, real-world emotional reactions might differ slightly over months or years of actual use.

The study also focused exclusively on young adults, meaning the results might differ for older generations who interact with emerging technology differently. People with specific personality traits, such as high social anxiety, might also experience these platforms in completely different ways. Moving forward, the research team suggests looking at the long-term habits formed by extensive chatbot use.

They hope future studies will investigate whether people develop deep emotional co-dependency with these applications. The researchers recommend that software designers build specific boundaries into their applications to protect users from social isolation. For example, a chatbot could be programmed to encourage users to call a real friend after a long digital conversation. By implementing these safety measures, developers could harness the mental health benefits of artificial intelligence without isolating people from the physical world.

The study, “The Dual Impact of AI Emotional Intelligence on Users: Are Social Chatbots Promoting Psychological Wellbeing or Deteriorating Social Wellbeing?” was authored by Shaphali Gupta, Sumit Saxena, and Sonia Kataria.

(c) PsyPost Media Inc
