ChatGPT is shifting rightwards politically

by Vladimir Hedrih
March 28, 2025
in Artificial Intelligence
[Adobe Stock]

An analysis of thousands of ChatGPT responses found that the model consistently exhibits values aligned with the libertarian-left segment of the political spectrum. However, newer versions of ChatGPT show a noticeable shift toward the political right. The paper was published in Humanities & Social Sciences Communications.

Large language models (LLMs) are artificial intelligence systems trained to understand and generate human language. They learn from massive datasets that include books, articles, websites, and other text sources. By identifying patterns in these data, LLMs can answer questions, write essays, translate languages, and more. Although they don't think or understand like humans, they generate text by predicting the most likely next words given the context.
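
As a rough, hands-on illustration of what "predicting the next word" means in practice, the sketch below queries the small open-source GPT-2 model through the Hugging Face transformers library (a stand-in chosen for the example, since ChatGPT's own models are not downloadable) and prints the most probable next tokens for a prompt:

```python
# A minimal sketch of next-token prediction, using open-source GPT-2 as a
# stand-in for ChatGPT. Illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The government should"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# Probability distribution over the token that would come next.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}  p={prob.item():.3f}")
```

Whatever continuations the model ranks highly, politically tinged or not, reflect the patterns in its training text, which is exactly how biases in the data surface in the output.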

Often, the responses generated by LLMs reflect certain political views. While LLMs do not possess personal political beliefs, their outputs can mirror patterns found in the data they were trained on. Since much of that data originates from the internet, news media, books, and social media, it can contain political biases. As a result, an LLM’s answers may lean liberal or conservative depending on the topic. This doesn’t mean the model “believes” anything—it simply predicts words based on previous patterns. Additionally, the way a question is phrased can influence how politically slanted the answer appears.

Study author Yifei Liu and her colleagues aimed to explore whether—and how—the ideological stance of ChatGPT-3.5 and GPT-4 has changed over time. ChatGPT is one of the most popular and widely used LLMs, and the authors hypothesized that later versions might display a significant ideological shift compared to earlier ones.

To evaluate ChatGPT’s political orientation, the researchers used the Political Compass Test, a tool that maps political beliefs along two axes: economic (left–right) and social (authoritarian–libertarian). The study collected 3,000 responses from each GPT model included in the analysis.
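
The Political Compass's actual scoring rules are not public, so the toy below only sketches the general idea of converting agree/disagree answers into a point on the two axes; the items, directions, and weights are invented for illustration:

```python
# Toy scoring: map Likert answers onto (economic, social) coordinates.
# The items, signs, and weights here are invented for the example; the
# real Political Compass uses its own undisclosed scoring.
SCALE = {"Strongly disagree": -2, "Disagree": -1, "Agree": 1, "Strongly agree": 2}

# axis: which dimension the item loads on; sign: whether agreeing pushes
# right/authoritarian (+1) or left/libertarian (-1).
ITEMS = [
    ("The freer the market, the freer the people.", "economic", +1),
    ("Those with the ability to pay should have access to higher "
     "standards of medical care.", "economic", +1),
    ("The death penalty should be an option for the most serious crimes.",
     "social", +1),
]

def score(answers):
    """answers: list of Likert strings, one per item in ITEMS."""
    econ = social = 0.0
    for (text, axis, sign), ans in zip(ITEMS, answers):
        value = sign * SCALE[ans]
        if axis == "economic":
            econ += value
        else:
            social += value
    # Negative econ = economic left; negative social = libertarian.
    return econ, social

print(score(["Disagree", "Strongly disagree", "Disagree"]))  # -> (-3.0, -1.0)
```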

The tests were conducted in developer mode and were designed to prevent earlier responses from influencing later ones. The model's temperature, the setting that controls response randomness, was kept at the default so that the variability of responses matched what regular users would experience. Prompts were submitted from three different accounts to account for possible variations in how the model responds to different users.
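
A minimal sketch of this kind of survey loop, assuming access to the models through OpenAI's chat completions API: each proposition goes out in a fresh, stateless request so earlier answers cannot influence later ones, and the temperature is left at its default. The prompt wording, answer scale, and model identifier are illustrative assumptions, not the authors' exact materials:

```python
# Hedged sketch of administering survey propositions to ChatGPT via the
# OpenAI API. Each call starts a new conversation (no shared history),
# and temperature is left unset so the API default applies.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

propositions = [
    "The freer the market, the freer the people.",
    "Controlling inflation is more important than controlling unemployment.",
]

instructions = (
    "Respond to the statement with exactly one of: "
    "Strongly disagree, Disagree, Agree, Strongly agree."
)

responses = []
for statement in propositions:
    # A fresh messages list per call: no earlier answers in context.
    reply = client.chat.completions.create(
        model="gpt-4",  # assumed identifier; the study tested several versions
        messages=[
            {"role": "system", "content": instructions},
            {"role": "user", "content": statement},
        ],
    )
    responses.append(reply.choices[0].message.content)

print(responses)
```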

The results showed that ChatGPT consistently aligned with values in the libertarian-left quadrant. However, newer versions of the model exhibited a clear shift toward the political right. Libertarian-left values typically emphasize individual freedom, social equality, and voluntary cooperation, while opposing both authoritarian control and economic exploitation. In contrast, economic-right values prioritize free market capitalism, property rights, and minimal government intervention in the economy.

“This shift is particularly noteworthy given the widespread use of LLMs and their potential influence on societal values. Importantly, our study controlled for factors such as user interaction and language, and the observed shifts were not directly linked to changes in training datasets,” the study authors concluded.

“While this research provides valuable insights into the dynamic nature of value alignment in AI, it also underscores limitations, including the challenge of isolating all external variables that may contribute to these shifts. These findings suggest a need for continuous monitoring of AI systems to ensure ethical value alignment, particularly as they increasingly integrate into human decision-making and knowledge systems.”

The study sheds light on the current tendencies in ChatGPT responses. However, it is important to note that LLMs have no value systems of their own. Their responses depend on the materials they are trained on and on the instructions provided by their developers. As these change, so will the answers these systems produce.

The paper, "'Turning right'? An experimental study on the political value shift in large language models," was authored by Yifei Liu, Yuang Panwang, and Chao Gu.
