Study uncovers a surprising discrepancy between the effects of perceived vs. actual use of AI in conversations

by Eric W. Dolan
May 9, 2023
in Artificial Intelligence, Social Psychology

New research published in Scientific Reports provides insight into how artificial intelligence can impact human communication. The findings indicate that people tend to use more positive language and perceive each other more positively when using an AI-based chat tool. However, people tend to evaluate others more negatively if they suspect that AI-generated “smart replies” are being used in the conversation.

“In tech and research on tech, the primary focus is almost always on the ‘user’ and how the technology impacts or helps the user,” said study author Malte F. Jung, an associate professor at Cornell University and the Nancy H. ’62 and Philip M. ’62 Young Sesquicentennial Faculty Fellow.

“This frustrates me because we often ignore how technology impacts others we live and work with. My early research (e.g. this one) on teams taught me how important our interpersonal interactions are. I want to bring that understanding to the way we design and study technology.”

“When Google released the first smart reply enabled messenger (Google Allo), I was excited to study it because it was one of the first widely deployed AI-based systems that inserted itself directly into our interactions with others,” Jung explained.

“Shifting the focus on how technology impacts our interactions with people is important. Almost anything we are able to achieve at home and at work rests on the quality of our interactions with others. More and more AI systems insert themselves into the very way we interact with others, and we don’t really understand the impact these systems have on interactions.”

“My students and I have spent the past ten years trying to understand how different technologies (including social robots, AI language tools, surgical robots, other machines) impact the way people interact and communicate with each other,” Jung told PsyPost. “We find over and over again that all these systems have a social impact far beyond the person directly interacting with the system. There is a ripple effect. We need to understand that.”

Jung and his colleagues conducted two randomized experiments to study the interpersonal consequences of using AI-generated smart replies during real-time text-based communication. Using the Google Reply API, the researchers developed a custom messaging application that allowed them to control which AI-generated replies were displayed during the conversations. Smart replies like these are generated by large language models (LLMs), which predict plausible next responses in a conversation.

For the first study, the researchers randomly assigned pairs of participants to have smart replies available or unavailable during a conversation about a policy issue. This resulted in four conditions: both participants could use smart replies, only one partner could, only the other partner could, or neither could.

The participants were recruited via Amazon’s Mechanical Turk and asked to come to an agreement on the “top 3 changes that Mechanical Turk could make to better handle unfairly rejected work.” The final sample included 361 participants and the conversations lasted for approximately 7 minutes on average.

After the conversation, the researchers measured several outcomes using self-report surveys, including perceived dominance, affiliation, cooperative communication, and perceived smart reply use. The researchers also measured communication speed and messaging sentiment using computational tools.

In the second study, 291 pairs of participants were randomly assigned to one of four conditions: (1) both participants received Google's default smart replies, (2) both received smart replies with positive sentiment, (3) both received smart replies with negative sentiment, or (4) neither received smart replies. As in Study 1, the researchers measured messaging sentiment to assess the impact of the smart replies.

Jung and his colleagues found that smart replies led to faster communication, with 10.2% more messages sent per minute. On average, smart replies accounted for 14.3% of sent messages. Participants were able to detect whether their communication partner was using smart replies at better-than-chance rates, but this ability was weak, meaning they were often wrong about whether their partner was actually using them.

But there was an interesting discrepancy between the perceived and actual use of AI. People who believed their partner was using smart replies rated that partner as less cooperative, less affiliative, and more dominant (even if the partner wasn't actually using them). The partner's actual use of smart replies, however, improved ratings of the partner's cooperation and increased feelings of affiliation.

“My key takeaway would be: AI tools have social side effects. They can change the way we build and maintain social relationships with others,” Jung told PsyPost.

“AI tools such as smart replies or ChatGPT are marketed with the promise to save time and boost productivity. What’s left out is that they impact how we interact with and relate to others. For example, Google markets its smart reply API with the promise to help “users respond to messages quickly, and makes it easier to reply to messages on devices with limited input capabilities.” Yet, we found that it changes our language and how we are perceived by others.”

“The Wall Street Journal published an article describing similar side effects such as the ones we found for ChatGPT and how its use for communication changes how we are perceived (often negatively),” Jung added.

As expected, the researchers found that the availability of negative smart replies caused conversations to have more negative emotional content than conversations with positive smart replies or no smart replies at all. These changes in the language used in conversations were driven by people’s use of smart replies, rather than just exposure to them.

“By the time we ran our final studies what we saw did not really surprise us anymore. In part also because we had found similar ‘social side effects’ from other technologies we studied over the years,” Jung told PsyPost.

“What was interesting to see though, was how the interactions directly corresponded with the linguistic properties of the smart replies they were given. Smart replies with a positive tone led to more positive interactions than negative smart replies. This suggests that whoever has control over the smart replies has a certain degree of control over the interactions we have with others.”

But there is still much to learn about the impact of AI on social interactions.

“As with any study, our study raises more questions than it answers,” Jung explained. “Some of the questions that I am most fascinated in are about the larger scale social implications of such AI systems. E.g. how do systems like predictive text and ChatGPT impact diversity in language use? What social norms will people adopt to navigate the benefits and risks of using these systems? How do we design autonomous systems with a better understanding of their social implications?”

The study, “Artificial intelligence in communication impacts language and social relationships,” was authored by Jess Hohenstein, Rene F. Kizilcec, Dominic DiFranzo, Zhila Aghajari, Hannah Mieczkowski, Karen Levy, Mor Naaman, Jeffrey Hancock, and Malte F. Jung.
