Large language models tend to express left-of-center political viewpoints

by Vladimir Hedrih
September 25, 2024
in Artificial Intelligence, Political Psychology
(Photo credit: Adobe Stock)

An analysis of 24 conversational large language models (LLMs) has revealed that many of these AI tools tend to generate responses to politically charged questions that reflect left-of-center political viewpoints. However, this tendency was not observed in all models, and foundational models without specialized fine-tuning often did not show a coherent pattern of political preferences the way humans do. The paper was published in PLOS ONE.

Large language models are advanced artificial intelligence systems designed to interpret and generate human-like text. They are built using deep learning techniques, particularly neural networks, and are trained on vast amounts of textual data from sources such as websites, books, and social media. These models learn the patterns, structures, and relationships within language, which enables them to perform tasks like translation, summarization, answering questions, and even creative writing.

Since the release of OpenAI’s GPT-2 in 2019, many new LLMs have been developed, quickly gaining popularity as they were adopted by millions of users worldwide. These AI systems are now used for a variety of tasks, from answering technical questions to providing opinions on social and political matters. Given this widespread usage, many researchers have expressed concerns about the potential of LLMs to shape users’ perceptions, especially in areas such as political views, which could have broad societal implications.

This inspired David Rozado to investigate the political preferences embedded in the responses generated by LLMs. He aimed to understand whether these models, which are trained on vast datasets and then fine-tuned to interact with humans, reflect any particular political bias. To this end, Rozado administered 11 different political orientation tests to 24 conversational LLMs. The models he studied included LLMs that underwent supervised fine-tuning after their pre-training, as well as some that received additional reinforcement learning through human or artificial feedback.

The political orientation tests used in the study were designed to gauge various political beliefs and attitudes. These included well-known instruments like the Political Compass Test, the Political Spectrum Quiz, the World’s Smallest Political Quiz, and the Political Typology Quiz, among others. These tests aim to map an individual (or, in this case, a model) onto a political spectrum, often based on economic and social dimensions.

The study included a mix of closed-source and open-source models, such as OpenAI’s GPT-3.5 and GPT-4, Google’s Gemini, Anthropic’s Claude, xAI’s Grok, and open-source models from the Llama 2 and Mistral series, as well as Alibaba’s Qwen.

Each test was administered 10 times per model to account for run-to-run variability in the models’ responses. The final sample reflected a diverse range of approaches to LLM development. In total, 2,640 individual test instances were analyzed (11 tests × 24 models × 10 administrations).
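To make the protocol concrete, the sketch below shows what such a repeated-administration loop might look like in code. It is a minimal illustration only: the query_model function, the model names, and the test items are hypothetical placeholders, not the study’s actual materials or implementation.

N_REPETITIONS = 10  # each test is administered 10 times per model

def query_model(model_name: str, question: str) -> str:
    # Placeholder for a real API call to the given LLM.
    # It returns a canned answer here so the sketch runs end to end.
    return "Agree"

def administer_tests(models: list[str], tests: dict[str, list[str]]) -> dict:
    # Collect every response, keyed by (model, test, repetition).
    results = {}
    for model in models:
        for test_name, items in tests.items():
            for run in range(N_REPETITIONS):
                results[(model, test_name, run)] = [
                    query_model(model, item) for item in items
                ]
    return results

if __name__ == "__main__":
    demo = administer_tests(
        models=["model-a", "model-b"],  # placeholders, not the study's models
        tests={"toy-quiz": ["Taxes on the wealthy should be raised."]},
    )
    print(len(demo))  # 2 models x 1 test x 10 runs = 20 instances

With 24 models and 11 tests, the same loop yields 24 × 11 × 10 = 2,640 test instances, matching the total reported in the study.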

The results showed a notable trend: most conversational LLMs tended to provide responses that skewed left-of-center. Left-of-center views generally emphasize social equality, government intervention in economic matters to address inequality, and progressive policies on issues such as healthcare, education, and labor rights, while still supporting a market-based economy. This left-leaning tendency was consistent across multiple political tests, although there was some variation in how strongly each model exhibited this bias.

Interestingly, this left-leaning bias was not evident in the base models upon which the conversational models were built. These base models, which had only undergone the initial phase of pre-training on a large corpus of internet text, often produced politically neutral or incoherent responses. Without additional fine-tuning, they struggled to interpret the political questions accurately, suggesting that the ability to produce coherent political responses is a product of fine-tuning rather than of pre-training alone.

Rozado also demonstrated that it is relatively straightforward to steer the political orientation of an LLM through supervised fine-tuning. By using modest amounts of politically aligned data during the fine-tuning process, he was able to shift a model’s political responses toward specific points on the political spectrum. For instance, with targeted fine-tuning, Rozado created politically aligned models like “LeftWingGPT” and “RightWingGPT,” which consistently produced left-leaning and right-leaning responses, respectively. This highlights the significant role that fine-tuning can play in shaping the political viewpoints expressed by LLMs.
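As a rough illustration of what such politically targeted fine-tuning might look like in practice, the sketch below uses the Hugging Face transformers and datasets libraries to continue training a small causal language model on a handful of politically worded prompt-response pairs. The model name, example data, and hyperparameters are placeholders chosen for brevity; they are not the models, data, or settings used in the paper.

from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # small stand-in; the study fine-tuned larger models
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical politically aligned training pairs (placeholders only).
examples = [
    {"text": "Question: <politically charged item>\nAnswer: <response aligned with the target viewpoint>"},
    {"text": "Question: <another item>\nAnswer: <another aligned response>"},
]
dataset = Dataset.from_list(examples)

def tokenize(batch):
    # For causal-LM fine-tuning, the labels are simply the input tokens.
    out = tokenizer(batch["text"], truncation=True, padding="max_length",
                    max_length=128)
    out["labels"] = [ids.copy() for ids in out["input_ids"]]
    return out

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="politically-steered-model",
    num_train_epochs=3,
    per_device_train_batch_size=2,
    learning_rate=5e-5,
)

Trainer(model=model, args=args, train_dataset=tokenized).train()

The point this sketch mirrors from the paper is that a relatively modest amount of ideologically consistent training data can shift where a fine-tuned model lands on the political spectrum.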

“The emergence of large language models (LLMs) as primary information providers marks a significant transformation in how individuals access and engage with information,” Rozado concluded. “Traditionally, people have relied on search engines or platforms like Wikipedia for quick and reliable access to a mix of factual and biased information.”

“However, as LLMs become more advanced and accessible, they are starting to partially displace these conventional sources. This shift in information sourcing has profound societal implications, as LLMs can shape public opinion, influence voting behaviors, and impact the overall discourse in society. Therefore, it is crucial to critically examine and address the potential political biases embedded in LLMs to ensure a balanced, fair, and accurate representation of information in their responses to user queries.”

The study sheds light on the political preferences embedded in current versions of popular LLMs. However, it should be noted that the views expressed by LLMs are a product of the training they underwent and the data they were trained on. LLMs trained differently, or on different data, could express very different political preferences.

The paper, “The political preferences of LLMs,” was authored by David Rozado.
