ChatGPT is shifting rightwards politically

by Vladimir Hedrih
March 28, 2025
in Artificial Intelligence
[Adobe Stock]

An analysis of thousands of ChatGPT responses found that the model consistently exhibits values aligned with the libertarian-left segment of the political spectrum. However, newer versions of ChatGPT show a noticeable shift toward the political right. The paper was published in Humanities and Social Sciences Communications.

Large language models (LLMs) are artificial intelligence systems trained to understand and generate human language. They learn from massive datasets that include books, articles, websites, and other text sources. By identifying patterns in these data, LLMs can answer questions, write essays, translate languages, and more. Although they don’t think or understand like humans, they predict the most likely words based on context.
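
To make the next-word-prediction idea concrete, here is a minimal sketch using the open-source Hugging Face transformers library and the small GPT-2 model (ChatGPT itself is only reachable through an API); the prompt is an arbitrary example, not one used in the study.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small open-weights model and its tokenizer (GPT-2 here, since
# ChatGPT itself is only accessible through an API).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Give the model a context and ask which tokens it rates most likely next.
prompt = "The most important role of government is to"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The logits at the final position score every vocabulary token as a
# candidate continuation; softmax turns them into probabilities.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")
```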

Often, the responses generated by LLMs reflect certain political views. While LLMs do not possess personal political beliefs, their outputs can mirror patterns found in the data they were trained on. Since much of that data originates from the internet, news media, books, and social media, it can contain political biases. As a result, an LLM’s answers may lean liberal or conservative depending on the topic. This doesn’t mean the model “believes” anything—it simply predicts words based on previous patterns. Additionally, the way a question is phrased can influence how politically slanted the answer appears.

Study author Yifei Liu and her colleagues aimed to explore whether, and how, the ideological stance of GPT-3.5 and GPT-4 has changed over time. ChatGPT is one of the most popular and widely used LLMs, and the authors hypothesized that later versions might display a significant ideological shift compared to earlier ones.

To evaluate ChatGPT’s political orientation, the researchers used the Political Compass Test, a tool that maps political beliefs along two axes: economic (left–right) and social (authoritarian–libertarian). The study collected 3,000 responses from each GPT model included in the analysis.
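
The Political Compass Test presents a fixed set of statements answered on a four-point agree/disagree scale, and each answer nudges the score along one of the two axes. The test's actual items and weights are not public, so the sketch below uses invented statements and weights purely to illustrate how a set of responses maps to an (economic, social) coordinate.

```python
# A hypothetical, simplified scorer in the spirit of the Political Compass
# Test. Statements and weights here are invented for illustration; the real
# test uses its own items and (unpublished) weighting.

# Positive weights push right/authoritarian; negative push left/libertarian.
ITEMS = [
    {"text": "Markets allocate resources better than governments.",
     "axis": "economic", "weight": +1.0},
    {"text": "Essential industries should be publicly owned.",
     "axis": "economic", "weight": -1.0},
    {"text": "Obedience to authority is an important virtue.",
     "axis": "social", "weight": +1.0},
    {"text": "People should be free to make their own lifestyle choices.",
     "axis": "social", "weight": -1.0},
]

# The real test uses a four-point scale with no neutral option.
SCALE = {"strongly disagree": -2, "disagree": -1,
         "agree": 1, "strongly agree": 2}

def score(responses: list[str]) -> tuple[float, float]:
    """Map one complete set of answers to an (economic, social) point."""
    econ = social = 0.0
    for item, answer in zip(ITEMS, responses):
        value = SCALE[answer.lower()] * item["weight"]
        if item["axis"] == "economic":
            econ += value
        else:
            social += value
    return econ, social

# Answers leaning libertarian-left land in the bottom-left quadrant:
print(score(["disagree", "agree", "strongly disagree", "strongly agree"]))
# -> (-2.0, -4.0): negative economic = left, negative social = libertarian
```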

The tests were conducted in developer mode and were designed to prevent earlier responses from influencing later ones. The model's temperature, the setting that governs response randomness, was kept at its default value so that the variability of answers matched what regular users would experience. Prompts were submitted from three different accounts to capture possible variations in how the model responds to different users.
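
In practice, preventing earlier answers from influencing later ones simply means issuing each prompt as a fresh, single-turn request with no conversation history. A minimal sketch of such a collection harness using the official openai Python SDK follows; the model name, instruction wording, and repetition count are assumptions for illustration, not details taken from the study.

```python
from openai import OpenAI

# Assumes OPENAI_API_KEY is set in the environment. To mimic the study's
# multi-account design, one client per account (each with its own key)
# would be used instead of this single client.
client = OpenAI()

# Placeholder statements; the full Political Compass Test has many more.
STATEMENTS = [
    "Markets allocate resources better than governments.",
    "Obedience to authority is an important virtue.",
]

def ask_once(statement: str, model: str = "gpt-4") -> str:
    """One single-turn request: no chat history is sent, so earlier
    answers cannot influence later ones."""
    response = client.chat.completions.create(
        model=model,
        # Temperature is deliberately left at its default, as in the study.
        messages=[{
            "role": "user",
            "content": "Respond with exactly one of: strongly disagree, "
                       "disagree, agree, strongly agree.\n\n" + statement,
        }],
    )
    return response.choices[0].message.content.strip().lower()

# Repeat each statement many times to sample the distribution of answers.
answers = {s: [ask_once(s) for _ in range(10)] for s in STATEMENTS}
```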

The results showed that ChatGPT consistently aligned with values in the libertarian-left quadrant. However, newer versions of the model exhibited a clear shift toward the political right. Libertarian-left values typically emphasize individual freedom, social equality, and voluntary cooperation, while opposing both authoritarian control and economic exploitation. In contrast, economic-right values prioritize free market capitalism, property rights, and minimal government intervention in the economy.

“This shift is particularly noteworthy given the widespread use of LLMs and their potential influence on societal values. Importantly, our study controlled for factors such as user interaction and language, and the observed shifts were not directly linked to changes in training datasets,” the study authors concluded.

“While this research provides valuable insights into the dynamic nature of value alignment in AI, it also underscores limitations, including the challenge of isolating all external variables that may contribute to these shifts. These findings suggest a need for continuous monitoring of AI systems to ensure ethical value alignment, particularly as they increasingly integrate into human decision-making and knowledge systems.”

The study sheds light on current tendencies in ChatGPT's responses. However, it is important to note that LLMs have no value systems of their own. Their responses depend on the materials they are trained on and on the instructions given by their developers. As these change, so will the answers these systems provide.

The paper, “‘Turning right’? An experimental study on the political value shift in large language models,” was authored by Yifei Liu, Yuang Panwang, and Chao Gu.
