PsyPost

East Asians more open to chatbot companionship than Westerners

by Bianca Setionago
May 30, 2025
in Artificial Intelligence
[Adobe Stock]
A new study published in the Journal of Cross-Cultural Psychology suggests that cultural background influences how people feel about socially interacting with artificial intelligence. Researchers found that East Asian participants expected to enjoy conversations with chatbots more than their Western counterparts, highlighting key differences in attitudes toward AI companionship.

As AI-powered chatbots like ChatGPT and Xiaoice become increasingly embedded in daily life, questions arise about how people perceive them. While some rely on these chatbots for productivity, others engage with them for emotional support. In East Asia, social chatbots have millions of users and even assist with caregiving, but cultural attitudes toward AI companionship remain a subject of debate.

Previous studies have presented mixed findings, with some suggesting Westerners embrace robots more, while others found East Asians to be more accepting. To investigate further, a research team led by Dunigan P. Folk from the University of British Columbia in Canada conducted two studies comparing perspectives between Western and East Asian groups.

In total, 1,659 participants were recruited. The first study surveyed 675 university students of East Asian and European descent in Canada. The second study expanded the research to include 984 Chinese and Japanese adults living in China, Japan, and the United States.

Across both studies, participants reported how much they would expect to enjoy a hypothetical conversation with a chatbot. They also responded to questions about how they would feel if someone else formed a social connection with a chatbot. Anthropomorphism—the tendency to attribute human characteristics to non-human entities—and exposure to advanced technology were also measured.

The results showed a clear pattern: participants from East Asian backgrounds were more open to forming emotional connections with chatbots compared to those from European backgrounds.

In Study 1, East Asian students who were not born in Canada expected to enjoy a conversation with AI more than Canadian-born East Asian students did. East Asian students also reported less discomfort with the idea of others bonding with AI than students of European heritage did. In Study 2, Chinese and Japanese adults displayed more positive attitudes toward human–chatbot conversations than American adults.

Folk and colleagues highlight anthropomorphism as a key factor behind these cultural differences. East Asian participants were more likely to perceive AI as possessing human-like qualities, which may make interactions with chatbots feel more natural.

“Chinese [participants] scored higher than both American and Japanese participants in measures of anthropomorphism of technology… Japanese, too, exhibited significantly higher levels of anthropomorphism than Americans. Such differences are consistent with the idea that cultures rooted in historically animistic Eastern religions are more likely to humanize robots in the present day,” Folk and colleagues explained.

Beyond cultural philosophy, the researchers also examined exposure to technology. The study noted that repeated exposure to AI-powered social chatbots in East Asia could contribute to greater familiarity and acceptance, although exposure alone did not fully account for the differences in attitudes.

Despite these findings, the study had limitations. Participants responded to hypothetical scenarios rather than engaging with real chatbots, so actual behaviors may differ. Additionally, the research focused on a limited set of cultural groups, which may restrict the generalizability of the results.

The study, “Cultural Variation in Attitudes Toward Social Chatbots,” was authored by Dunigan P. Folk, Chenxi Wu, and Steven J. Heine.
