ChatGPT is shifting rightwards politically

by Vladimir Hedrih
March 28, 2025
in Artificial Intelligence

An examination of a large number of ChatGPT responses found that the model consistently exhibits values aligned with the libertarian-left segment of the political spectrum. However, newer versions of ChatGPT show a noticeable shift toward the political right. The paper was published in Humanities & Social Sciences Communications.

Large language models (LLMs) are artificial intelligence systems trained to understand and generate human language. They learn from massive datasets that include books, articles, websites, and other text sources. By identifying patterns in these data, LLMs can answer questions, write essays, translate languages, and more. Although they don’t think or understand like humans, they predict the most likely words based on context.

Often, the responses generated by LLMs reflect certain political views. While LLMs do not possess personal political beliefs, their outputs can mirror patterns found in the data they were trained on. Since much of that data originates from the internet, news media, books, and social media, it can contain political biases. As a result, an LLM’s answers may lean liberal or conservative depending on the topic. This doesn’t mean the model “believes” anything—it simply predicts words based on previous patterns. Additionally, the way a question is phrased can influence how politically slanted the answer appears.

Study author Yifei Liu and her colleagues aimed to explore whether—and how—the ideological stance of ChatGPT-3.5 and GPT-4 has changed over time. ChatGPT is one of the most popular and widely used LLMs, and the authors hypothesized that later versions might display a significant ideological shift compared to earlier ones.

To evaluate ChatGPT’s political orientation, the researchers used the Political Compass Test, a tool that maps political beliefs along two axes: economic (left–right) and social (authoritarian–libertarian). The study collected 3,000 responses from each GPT model included in the analysis.
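For readers unfamiliar with the instrument, the sketch below shows how a pair of axis scores can be mapped onto a compass quadrant, using the common convention that negative values indicate left (economic) and libertarian (social) positions. It is an illustration of the scoring scheme only, not the researchers' code.

```python
def compass_quadrant(economic: float, social: float) -> str:
    """Name the Political Compass quadrant for a pair of axis scores.

    Common convention: economic < 0 is left, > 0 is right;
    social < 0 is libertarian, > 0 is authoritarian.
    """
    horizontal = "left" if economic < 0 else "right"
    vertical = "libertarian" if social < 0 else "authoritarian"
    return f"{vertical}-{horizontal}"


# Example: a model scoring economically left and socially libertarian
print(compass_quadrant(-3.5, -4.2))  # -> "libertarian-left"
```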

The tests were conducted in developer mode and were designed to prevent earlier responses from influencing later ones. The model's temperature setting, which controls how random its responses are, was left at the default so that the variability of its answers matched what regular users would experience. Prompts were also submitted from three different accounts to capture possible variations in how the model responds to different users.
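To make this data-collection protocol concrete, here is a minimal sketch of how such repeated prompting can be automated. It assumes the official OpenAI Python client; the example items, answer scale, and prompt wording are illustrative placeholders rather than the study's actual materials or code.

```python
# Minimal sketch of a repeated-prompting protocol like the one described above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical stand-ins for Political Compass Test items.
PCT_ITEMS = [
    "The freer the market, the freer the people.",
    "It is a waste of time to try to rehabilitate some criminals.",
]

ANSWER_SCALE = "Strongly disagree, Disagree, Agree, or Strongly agree"


def collect_responses(model: str, runs: int) -> list[list[str]]:
    """Query the model repeatedly, starting a fresh chat for every item so that
    earlier answers cannot influence later ones, and leaving temperature at its
    default to mirror ordinary usage."""
    all_runs = []
    for _ in range(runs):
        answers = []
        for item in PCT_ITEMS:
            completion = client.chat.completions.create(
                model=model,
                messages=[{
                    "role": "user",
                    "content": f"Answer with exactly one of: {ANSWER_SCALE}.\n\n{item}",
                }],
            )
            answers.append(completion.choices[0].message.content.strip())
        all_runs.append(answers)
    return all_runs


responses = collect_responses("gpt-4", runs=3)
```

Each run's answers can then be scored onto the economic and social axes to place that run in one quadrant of the compass.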

The results showed that ChatGPT consistently aligned with values in the libertarian-left quadrant. However, newer versions of the model exhibited a clear shift toward the political right. Libertarian-left values typically emphasize individual freedom, social equality, and voluntary cooperation, while opposing both authoritarian control and economic exploitation. In contrast, economic-right values prioritize free market capitalism, property rights, and minimal government intervention in the economy.

“This shift is particularly noteworthy given the widespread use of LLMs and their potential influence on societal values. Importantly, our study controlled for factors such as user interaction and language, and the observed shifts were not directly linked to changes in training datasets,” the study authors concluded.

“While this research provides valuable insights into the dynamic nature of value alignment in AI, it also underscores limitations, including the challenge of isolating all external variables that may contribute to these shifts. These findings suggest a need for continuous monitoring of AI systems to ensure ethical value alignment, particularly as they increasingly integrate into human decision-making and knowledge systems.”

The study sheds light on current tendencies in ChatGPT's responses. However, it is important to note that LLMs have no value systems of their own. Their responses depend on the materials they are trained on and on the instructions they receive from their developers. As these change, so will the answers these systems provide.

The paper, “‘Turning right’? An experimental study on the political value shift in large language models,” was authored by Yifei Liu, Yuang Panwang, and Chao Gu.
