Why would shoppers prefer chatbots to humans? New study pinpoints a key factor

by Eric W. Dolan
June 26, 2024
in Artificial Intelligence, Business
(Photo credit: Adobe Stock)

Technological advancements are revolutionizing customer service interactions, with firms increasingly relying on chatbots—automated virtual agents that can simulate human conversation. While people generally prefer interacting with human customer service agents, a new study reveals an interesting twist: when consumers feel embarrassed about their purchases, they actually prefer dealing with chatbots. The study was published in the Journal of Consumer Psychology.

The primary aim of the study was to understand how consumers’ concerns about self-presentation—essentially, their worries about being judged by others—affect their interactions with chatbots compared to human customer service agents. Lead researcher Jianna Jin, an assistant professor at the University of Notre Dame, and her colleagues wanted to explore whether chatbots could mitigate feelings of embarrassment in online shopping scenarios. The question is particularly relevant as chatbots, whose identities may be left ambiguous or clearly disclosed, become more prevalent in the digital marketplace.

The researchers conducted a series of five studies to understand consumer preferences when dealing with chatbots versus human agents in contexts likely to elicit embarrassment. Participants were recruited from Amazon Mechanical Turk and other platforms.

Study 1 involved 403 participants who were asked to imagine buying a personal lubricant from an online store. They interacted with an ambiguous chat agent, meaning the agent’s identity as either human or chatbot was not disclosed. The participants then inferred the agent’s identity and completed a measure of self-presentation concerns about sex-related topics.

The results showed that participants with higher self-presentation concerns were more likely to infer that the ambiguous chat agent was human. This finding suggested that in situations where people felt anxious about how they were perceived, they tended to err on the side of caution, assuming the agent might be human to prepare themselves for potential embarrassment.

Study 2 expanded on these findings by comparing reactions to different product categories. Here, 795 female participants imagined purchasing either a personal lubricant or body lotion from an online store and interacted with the same ambiguous chat agent as in Study 1. The study aimed to see if the type of product influenced their perception of the chat agent’s identity.

As predicted, participants inferred the agent to be human more frequently when shopping for personal lubricant compared to body lotion. This demonstrated that the nature of the product could activate self-presentation concerns, affecting how consumers perceive and interact with customer service agents.

Study 3 shifted the focus to clearly identified chatbots and human agents. A large sample of 1,501 participants imagined buying antidiarrheal medication and interacted with either a non-anthropomorphized chatbot (a chatbot without human-like features), an anthropomorphized chatbot (a chatbot with human-like features), or a human service representative.

Participants showed a higher willingness to engage with the non-anthropomorphized chatbot compared to the human agent, particularly when the purchase context involved potential embarrassment. However, this preference diminished when the chatbot was anthropomorphized, indicating that giving chatbots human-like qualities can leave consumers feeling nearly as judged as they would by a human agent.

Study 4 delved deeper into how self-presentation concerns influenced perceptions of a clearly identified anthropomorphized chatbot versus a human agent. Participants were asked to imagine purchasing a personal lubricant and rated the chatbot or human agent on perceived experience (the capacity to feel emotions and have consciousness).

Those with higher self-presentation concerns ascribed more experience to the anthropomorphized chatbot, despite knowing it was not human. This finding suggested that anthropomorphism introduces ambiguity about a chatbot’s human-like qualities, affecting consumer comfort levels.

Studies 5a and 5b involved real interactions with chatbots. In Study 5a, 386 undergraduate students were asked to choose between two online stores, one with a human service agent and one with a chatbot, for purchasing either antidiarrheal or hay fever medication. Participants preferred the chatbot store for the embarrassing product (antidiarrheal medication) and the human store for the non-embarrassing product (hay fever medication). This choice was mediated by feelings of embarrassment, as indicated by participants’ spontaneous explanations.

Study 5b involved 595 participants interacting with a real chatbot about skincare concerns. Participants were more willing to provide their email addresses to the chatbot than to a human agent, a behavior mediated by reduced feelings of embarrassment when interacting with the chatbot.

“In general, research shows people would rather interact with a human customer service agent than a chatbot,” said Jin, who led the study as a doctoral student at Ohio State’s Fisher College of Business. “But we found that when people are worried about others judging them, that tendency reverses and they would rather interact with a chatbot because they feel less embarrassed dealing with a chatbot than a human.”

While the study offers significant insights, it has some limitations. The reliance on self-reported measures and hypothetical scenarios in some of the studies may not fully capture real-world behaviors. Additionally, the focus was mainly on specific embarrassing product categories, which may not generalize to all types of products or services.

Nevertheless, the findings have some practical implications. Companies should consider these findings when designing their customer service strategies, especially for products that might cause consumers to feel self-conscious. By clearly identifying chatbots and avoiding excessive anthropomorphism, businesses can improve customer comfort and engagement.

“Chatbots are becoming more and more common as customer service agents, and companies are not required in most states to disclose if they use them,” said co-author Rebecca Walker Reczek, a professor at Ohio State’s Fisher College. “But it may be important for companies to let consumers know if they’re dealing with a chatbot.”

The study, “Avoiding embarrassment online: Response to and inferences about chatbots when purchases activate self-presentation concerns,” was authored by Jianna Jin, Jesse Walker, and Rebecca Walker Reczek.
