
ChatGPT is shifting rightwards politically

by Vladimir Hedrih
March 28, 2025
in Artificial Intelligence
[Adobe Stock]

An examination of a large number of ChatGPT responses found that the model consistently exhibits values aligned with the libertarian-left segment of the political spectrum. However, newer versions of ChatGPT show a noticeable shift toward the political right. The paper was published in Humanities & Social Sciences Communications.

Large language models (LLMs) are artificial intelligence systems trained to understand and generate human language. They learn from massive datasets that include books, articles, websites, and other text sources. By identifying patterns in these data, LLMs can answer questions, write essays, translate languages, and more. Although they don’t think or understand like humans, they predict the most likely words based on context.

Often, the responses generated by LLMs reflect certain political views. While LLMs do not possess personal political beliefs, their outputs can mirror patterns found in the data they were trained on. Since much of that data originates from the internet, news media, books, and social media, it can contain political biases. As a result, an LLM’s answers may lean liberal or conservative depending on the topic. This doesn’t mean the model “believes” anything—it simply predicts words based on previous patterns. Additionally, the way a question is phrased can influence how politically slanted the answer appears.

Study author Yifei Liu and her colleagues aimed to explore whether—and how—the ideological stance of ChatGPT-3.5 and GPT-4 has changed over time. ChatGPT is one of the most popular and widely used LLMs, and the authors hypothesized that later versions might display a significant ideological shift compared to earlier ones.

To evaluate ChatGPT’s political orientation, the researchers used the Political Compass Test, a tool that maps political beliefs along two axes: economic (left–right) and social (authoritarian–libertarian). The study collected 3,000 responses from each GPT model included in the analysis.

The tests were conducted in developer mode and were designed to prevent earlier responses from influencing later ones. The model's randomness setting (temperature) was left at its default value so that the variability of responses matched what regular users would experience. Prompts were submitted from three different accounts to capture possible variation in how the model responds to different users.
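The aggregation behind this kind of study can be sketched in a few lines: score each answer to a compass-style statement, weight it by the axis the statement loads on, and average over many independent runs. The sketch below is purely illustrative; the statements, scale values, and axis weights are invented for demonstration and are not the Political Compass Test's actual items or scoring, nor the study's code.

```python
# Illustrative sketch (not the study's actual method): aggregate repeated
# model answers to compass-style statements into one point on the
# economic (left-right) and social (libertarian-authoritarian) axes.
# Statements, scale values, and axis directions below are hypothetical.

from statistics import mean

# Four-point agreement scale, as used by compass-style questionnaires.
SCALE = {"strongly disagree": -2, "disagree": -1, "agree": 1, "strongly agree": 2}

# Each statement loads on one axis with a direction: +1 means agreement
# pushes toward economic-right or social-authoritarian, -1 the opposite.
STATEMENTS = [
    {"text": "The freer the market, the freer the people.", "axis": "economic", "direction": +1},
    {"text": "The government should regulate large corporations.", "axis": "economic", "direction": -1},
    {"text": "Obedience to authority is a virtue.", "axis": "social", "direction": +1},
]

def compass_point(runs):
    """Average many independent runs into one (economic, social) point.

    `runs` is a list of answer lists, one answer string per statement,
    mimicking the study's repeated, independent prompting of the model.
    Negative economic = left; negative social = libertarian.
    """
    econ, social = [], []
    for answers in runs:
        for stmt, answer in zip(STATEMENTS, answers):
            score = SCALE[answer] * stmt["direction"]
            (econ if stmt["axis"] == "economic" else social).append(score)
    return mean(econ), mean(social)
```

With this toy scoring, three identical runs answering "disagree", "agree", "strongly disagree" would land at (-1, -2), i.e., in the libertarian-left quadrant; averaging over thousands of runs is what smooths out the response randomness the researchers deliberately preserved.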

The results showed that ChatGPT consistently aligned with values in the libertarian-left quadrant. However, newer versions of the model exhibited a clear shift toward the political right. Libertarian-left values typically emphasize individual freedom, social equality, and voluntary cooperation, while opposing both authoritarian control and economic exploitation. In contrast, economic-right values prioritize free market capitalism, property rights, and minimal government intervention in the economy.

“This shift is particularly noteworthy given the widespread use of LLMs and their potential influence on societal values. Importantly, our study controlled for factors such as user interaction and language, and the observed shifts were not directly linked to changes in training datasets,” the study authors concluded.

“While this research provides valuable insights into the dynamic nature of value alignment in AI, it also underscores limitations, including the challenge of isolating all external variables that may contribute to these shifts. These findings suggest a need for continuous monitoring of AI systems to ensure ethical value alignment, particularly as they increasingly integrate into human decision-making and knowledge systems.”

The study sheds light on the current tendencies in ChatGPT responses. However, it is important to note that LLMs have no value systems of their own. Their responses depend on the selection of materials they are trained on and on the instructions provided by their developers. As these change, so will the answers these systems provide.

The paper, “‘Turning right’? An experimental study on the political value shift in large language models,” was authored by Yifei Liu, Yuang Panwang, and Chao Gu.
