
Why would shoppers prefer chatbots to humans? New study pinpoints a key factor

by Eric W. Dolan
June 26, 2024
in Artificial Intelligence, Business
(Photo credit: Adobe Stock)

Technological advances are transforming customer service, with firms increasingly relying on chatbots—automated virtual agents that can simulate human conversation. While people generally prefer interacting with human customer service agents, a new study reveals an interesting twist: when consumers feel embarrassed about their purchases, they actually prefer dealing with chatbots. The study was published in the Journal of Consumer Psychology.

The primary aim of the study was to understand how consumers’ concerns about self-presentation—essentially, their worries about being judged by others—affect their interactions with chatbots compared to human customer service agents. Lead researcher Jianna Jin, an assistant professor at the University of Notre Dame, and her colleagues wanted to explore whether chatbots could mitigate feelings of embarrassment in online shopping scenarios. The question is particularly relevant as chatbots—whose identities may be disclosed or left ambiguous—become more prevalent in the digital marketplace.

The researchers conducted a series of five studies to understand consumer preferences when dealing with chatbots versus human agents in contexts likely to elicit embarrassment. Participants were recruited from Amazon Mechanical Turk and other platforms.

Study 1 involved 403 participants who were asked to imagine buying a personal lubricant from an online store. They interacted with an ambiguous chat agent, meaning the agent’s identity as either human or chatbot was not disclosed. The participants then had to infer the agent’s identity and complete a measure of self-presentation concerns related to sex-related topics.

The results showed that participants with higher self-presentation concerns were more likely to infer that the ambiguous chat agent was human. This finding suggested that in situations where people felt anxious about how they were perceived, they tended to err on the side of caution, assuming the agent might be human to prepare themselves for potential embarrassment.

Study 2 expanded on these findings by comparing reactions to different product categories. Here, 795 female participants imagined purchasing either a personal lubricant or body lotion from an online store and interacted with the same ambiguous chat agent as in Study 1. The study aimed to see if the type of product influenced their perception of the chat agent’s identity.

As predicted, participants inferred the agent to be human more frequently when shopping for personal lubricant compared to body lotion. This demonstrated that the nature of the product could activate self-presentation concerns, affecting how consumers perceive and interact with customer service agents.

Study 3 shifted the focus to clearly identified chatbots and human agents. A large sample of 1,501 participants was asked to imagine buying antidiarrheal medication and interacted with either a non-anthropomorphized chatbot (a chatbot without human-like features), an anthropomorphized chatbot (a chatbot with human-like features), or a human service agent.

Participants showed a higher willingness to engage with the non-anthropomorphized chatbot than with the human agent, particularly when the purchase context involved potential embarrassment. However, this preference diminished when the chatbot was anthropomorphized, indicating that giving chatbots human-like qualities can make consumers feel judged in much the same way they would by a human agent.

Study 4 delved deeper into how self-presentation concerns influenced perceptions of a clearly identified anthropomorphized chatbot versus a human agent. Participants were asked to imagine purchasing a personal lubricant and rated the chatbot or human agent on perceived experience (the capacity to feel emotions and have consciousness).

Those with higher self-presentation concerns ascribed more experience to the anthropomorphized chatbot, despite knowing it was not human. This finding suggested that anthropomorphism introduces ambiguity about a chatbot’s human-like qualities, affecting consumer comfort levels.

Studies 5a and 5b involved real interactions with chatbots. In Study 5a, 386 undergraduate students were asked to choose between two online stores, one with a human service agent and one with a chatbot, for purchasing either antidiarrheal or hay fever medication. Participants preferred the chatbot store for the embarrassing product (antidiarrheal medication) and the human store for the non-embarrassing product (hay fever medication). This choice was mediated by feelings of embarrassment, as indicated by participants’ spontaneous explanations.

Study 5b involved 595 participants interacting with a real chatbot about skincare concerns. Participants were more willing to provide their email addresses to the chatbot than to a human agent, a behavior mediated by reduced feelings of embarrassment when interacting with the chatbot.

“In general, research shows people would rather interact with a human customer service agent than a chatbot,” said Jin, who led the study as a doctoral student at Ohio State’s Fisher College of Business. “But we found that when people are worried about others judging them, that tendency reverses and they would rather interact with a chatbot because they feel less embarrassed dealing with a chatbot than a human.”

While the study offers significant insights, it has some limitations. The reliance on self-reported measures and hypothetical scenarios in some of the studies may not fully capture real-world behaviors. Additionally, the focus was mainly on specific embarrassing product categories, which may not generalize to all types of products or services.

Nevertheless, the findings have practical implications. Companies should consider them when designing customer service strategies, especially for products that might make consumers feel self-conscious. By clearly identifying chatbots and avoiding excessive anthropomorphism, businesses can improve customer comfort and engagement.

“Chatbots are becoming more and more common as customer service agents, and companies are not required in most states to disclose if they use them,” said co-author Rebecca Walker Reczek, a professor at Ohio State’s Fisher College. “But it may be important for companies to let consumers know if they’re dealing with a chatbot.”

The study, “Avoiding embarrassment online: Response to and inferences about chatbots when purchases activate self-presentation concerns,” was authored by Jianna Jin, Jesse Walker, and Rebecca Walker Reczek.
