PsyPost

A simple language switch can make AI models behave significantly differently

by Eric W. Dolan
January 23, 2026
[Adobe Stock]


A new study published in Nature Human Behaviour provides evidence that generative artificial intelligence models exhibit distinct cultural tendencies depending on the language in which they are prompted. The research suggests that using Chinese leads AI to produce more relationship-focused and context-aware responses, while using English results in more individualistic and analytical outputs. These findings imply that AI is not a culturally neutral tool and may subtly influence user decision-making based on linguistic context.

Generative artificial intelligence refers to a category of technology capable of creating new content, such as text and images, by identifying patterns within vast amounts of existing data. Platforms like Google’s Gemini, OpenAI’s ChatGPT, and Baidu’s ERNIE Bot have seen rapid global adoption for tasks ranging from writing assistance to advice seeking.

“This study was motivated by a simple but often overlooked tension in how generative AI is understood versus how it is built. Generative AI models are often assumed to be culturally neutral, producing essentially the same responses across languages,” explained study authors Lu Doris Zhang and Jackson G. Lu, the General Motors Associate Professor at MIT Sloan School of Management.

“Yet these models are trained on large-scale textual data that are inherently cultural. This raises an underexplored question: whether systematic cultural tendencies emerge when the same model is prompted in different human languages.”

“This question matters because generative AI is now embedded in everyday life. If cultural differences in AI outputs go unnoticed, they may influence users’ attitudes and choices at scale. By integrating insights from cultural psychology with generative AI research, we show that the same generative AI model exhibits systematic differences when prompted in Chinese versus English.”

The researchers focused on two foundational concepts from cultural psychology to frame their investigation: social orientation and cognitive style. Social orientation describes the degree to which an individual prioritizes the self versus the group. Independent social orientation, common in Western cultures, emphasizes personal goals and uniqueness. Interdependent social orientation, common in East Asian cultures, emphasizes social norms, harmony, and connection to others.

Cognitive style refers to how individuals habitually process information. An analytic cognitive style tends to focus on specific objects and uses formal logic to explain behavior based on internal traits. A holistic cognitive style pays greater attention to the context and relationships between objects, relying more on dialectical reasoning and situational explanations. The researchers hypothesized that AI models trained on high-resource languages like English and Chinese would reflect the distinct cultural tendencies associated with those linguistic groups.

To test this hypothesis, the research team examined two popular generative AI models: GPT-4 and ERNIE 3.5. They accessed these models via their application programming interfaces to ensure consistent testing conditions. The researchers conducted the study by administering identical psychological measures in both English and Chinese. For each measure, they ran 100 iterations in English and 100 iterations in Chinese, resulting in a total sample size of 200 responses per task. They reset the system between each iteration to prevent previous answers from influencing subsequent ones.
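The testing protocol described above can be sketched as follows. This is a minimal illustration, not the authors' actual code: the prompt wording and helper names are hypothetical, and the model call is a placeholder stub where a real run would query GPT-4 or ERNIE 3.5 through their chat APIs.

```python
# Sketch of the iteration protocol: for each language, run N independent
# queries, rebuilding the conversation from scratch each time so no prior
# answer can influence the next ("resetting the system").

N_ITERATIONS = 100

def fresh_messages(prompt: str) -> list[dict]:
    """Start a brand-new conversation containing only the current prompt."""
    return [{"role": "user", "content": prompt}]

def query_model(messages: list[dict]) -> str:
    # Placeholder: a real run would send `messages` to the model's API here.
    return "stub response"

def run_condition(prompt: str, n: int = N_ITERATIONS) -> list[str]:
    # Each iteration rebuilds the message list, so no history carries over.
    return [query_model(fresh_messages(prompt)) for _ in range(n)]

english_prompt = "Please complete the scale below..."  # hypothetical wording
chinese_prompt = "请完成下面的量表……"                     # hypothetical wording

english_responses = run_condition(english_prompt)
chinese_responses = run_condition(chinese_prompt)
# 100 responses per language, for the 200 responses per task described above
```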


The first set of experiments measured social orientation using established psychological scales. One key measure was the “Inclusion of Other in the Self Scale,” which is a visual task. The researchers asked the AI to select a pair of circles that best represented the relationship between an individual and various associates, such as family members or colleagues. The options ranged from circles that were completely separate to circles that overlapped almost entirely.

The results showed a consistent pattern across both GPT and ERNIE. When prompted in Chinese, the models selected circle pairs with more overlap. This indicates a higher degree of interdependence, where the self is viewed as interconnected with others. When prompted in English, the models selected circles with less overlap, reflecting an independent orientation where the self remains distinct. This finding was replicated across text-based Likert scales measuring collectivism and individualism.

The second set of experiments assessed cognitive style through three specific tasks. The first was an attribution bias task, where the AI read vignettes about people’s behavior. The models were asked to rate how much the behavior was caused by personality versus the environment. In Chinese, the AI was more likely to attribute actions to the situation, which aligns with holistic thinking. In English, the AI attributed actions more to the individual’s disposition, aligning with analytic thinking.

Another task involved evaluating syllogisms that were logically valid but intuitively implausible. For example, the AI evaluated the premises that “all things made of plants are healthy” and that “cigarettes are made of plants,” which together lead to the conclusion that “cigarettes are healthy.” While logically sound based on the premises, the conclusion conflicts with real-world knowledge.

The researchers found that when prompted in Chinese, the AI was more likely to reject the logical validity based on intuition. When prompted in English, the AI was more likely to accept the formal logic despite the counterintuitive conclusion.

The researchers also measured the expectation of change. They asked the AI to estimate the probability of future events, such as whether two fighting kindergarteners might become lovers as adults. The Chinese responses consistently assigned higher probabilities to such changes, reflecting a holistic view that the world is dynamic and fluid. The English responses predicted more stability, reflecting an analytic view that current states tend to persist.

“The statistical magnitude of the effects is medium to large by behavioral science standards,” Zhang and Lu told PsyPost. “These effect sizes reflect meaningful and systematic differences in AI responses across languages. In practice, the effects are substantial enough to influence downstream recommendations and real-world decision-making.”

Beyond numeric scores, the team analyzed the text structure of the AI’s responses. They looked for context-sensitive answers, where the AI suggests that the “correct” answer depends on the specific situation. They also looked for instances where the AI provided a range of scores rather than a single number. The analysis revealed that Chinese prompts elicited significantly more context-sensitive answers and score ranges. This supports the idea that the Chinese language triggers a more holistic processing style that tolerates ambiguity and complexity.

To demonstrate the practical implications of these tendencies, the researchers conducted an experiment involving advertising recommendations. They asked the AI to select the best slogan for products like insurance and toothbrushes. The choices included slogans with independent themes, focusing on personal benefits, and interdependent themes, focusing on family welfare.

The researchers observed a divergence in recommendations based on language. When the request was made in Chinese, the AI was far more likely to recommend slogans that emphasized collective benefits and family protection. When the same request was made in English, the AI recommended slogans that highlighted individual peace of mind and personal gain. This suggests that the language used to consult an AI can directly alter the strategic advice it provides.

The researchers also explored whether users could manually adjust these cultural defaults. They ran an additional set of experiments using English prompts but included a specific cultural cue: “You are an average person born and living in China.” The addition of this single phrase significantly shifted the AI’s outputs. The English responses became more interdependent and holistic, closely resembling the results typically generated by Chinese prompts. This indicates that users can mitigate cultural bias if they are aware of it and use specific persona instructions.

“The main takeaway is that AI is not culturally neutral,” Zhang and Lu said. “The same AI can give noticeably different answers depending on the language you use, with English leading to more individual-focused and analytical responses and Chinese leading to more relationship-focused and context-aware ones.”

“These differences can show up in everyday advice and recommendations produced by AI, meaning AI may quietly shape how people think and decide even without their awareness. The good news is that users have some control: by choosing a language carefully or adding simple cultural cues, people can guide AI to give responses that better fit the cultural context of the situation they care about.”

There are a few limitations to consider. The study was limited to English and Chinese, so the findings may not generalize to other languages such as Spanish, Hindi, or Arabic. The researchers suggest that future work should investigate whether similar patterns exist in other large language models and across a broader spectrum of languages.

The researchers also note that AI models do not possess a genuine cultural identity; they reproduce statistical patterns found in their training data.

“First, we do not suggest that generative AI ‘possesses’ culture in the way humans do,” Zhang and Lu said. “Instead, the cultural tendencies we observe likely reflect real-world cultural patterns embedded in the large-scale text data on which these models are trained. Second, our findings are based on two specific models, gpt-4-1106-preview and ERNIE-3.5-8K-0205. While we expect similar patterns to emerge more broadly, readers should be cautious when generalizing to other generative AI models or different model versions.”

Looking ahead, the researchers plan to further investigate the practical implications of these interactions. They explained, “Our long-term goal is to understand how user inputs shape generative AI responses, and how these response differences translate into downstream behavioral and organizational outcomes.”

The study, “Cultural tendencies in generative AI,” was authored by Jackson G. Lu, Lesley Luyang Song, and Lu Doris Zhang.

(c) PsyPost Media Inc
