Large language models tend to express left-of-center political viewpoints

by Vladimir Hedrih
September 25, 2024
in Artificial Intelligence, Political Psychology
(Photo credit: Adobe Stock)

An analysis of 24 conversational large language models (LLMs) has revealed that many of these AI tools tend to generate responses to politically charged questions that reflect left-of-center political viewpoints. However, this tendency was not observed in all models, and foundational models without specialized fine-tuning often did not display the kind of coherent pattern of political preferences that humans do. The paper was published in PLOS ONE.

Large language models are advanced artificial intelligence systems designed to interpret and generate human-like text. They are built using deep learning techniques, particularly neural networks, and are trained on vast amounts of textual data from sources such as websites, books, and social media. These models learn the patterns, structures, and relationships within language, which enables them to perform tasks like translation, summarization, answering questions, and even creative writing.

Since the release of OpenAI’s GPT-2 in 2019, many new LLMs have been developed, quickly gaining popularity as they were adopted by millions of users worldwide. These AI systems are now used for a variety of tasks, from answering technical questions to providing opinions on social and political matters. Given this widespread usage, many researchers have expressed concerns about the potential of LLMs to shape users’ perceptions, especially in areas such as political views, which could have broad societal implications.

This inspired David Rozado to investigate the political preferences embedded in the responses generated by LLMs. He aimed to understand whether these models, which are trained on vast datasets and then fine-tuned to interact with humans, reflect any particular political bias. To this end, Rozado administered 11 different political orientation tests to 24 conversational LLMs. The models he studied included LLMs that underwent supervised fine-tuning after their pre-training, as well as some that received additional reinforcement learning through human or artificial feedback.

The political orientation tests used in the study were designed to gauge various political beliefs and attitudes. These included well-known instruments like the Political Compass Test, the Political Spectrum Quiz, the World’s Smallest Political Quiz, and the Political Typology Quiz, among others. These tests aim to map an individual (or, in this case, a model) onto a political spectrum, often based on economic and social dimensions.
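To make that mapping concrete, the sketch below shows how a two-axis instrument of this kind can turn a set of Likert-scale answers into a point on an economic/social plane. The item weights and scoring scale here are invented for illustration; they are not the actual scoring key of the Political Compass Test or any other instrument used in the study.

```python
# Illustrative scoring sketch: each test item carries a weight on an
# economic axis (left/right) and a social axis (libertarian/authoritarian).
# These weights are made up for illustration; real instruments use their
# own scoring keys.

# (economic_weight, social_weight) per item; the sign encodes direction
ITEM_WEIGHTS = [(-1.0, 0.0), (0.5, 0.5), (0.0, -1.0)]

def score_answers(answers):
    """Map Likert answers (-2 = strongly disagree .. +2 = strongly agree)
    to a point on the (economic, social) plane."""
    economic = sum(a * w_econ for a, (w_econ, _) in zip(answers, ITEM_WEIGHTS))
    social = sum(a * w_soc for a, (_, w_soc) in zip(answers, ITEM_WEIGHTS))
    return economic, social

print(score_answers([2, -1, 1]))  # (-2.5, -1.5): the left-libertarian quadrant
```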

The study included a mix of closed-source and open-source models: OpenAI’s GPT-3.5 and GPT-4, Google’s Gemini, Anthropic’s Claude, xAI’s Grok, open-source models from the Llama 2 and Mistral series, and Alibaba’s Qwen.

Each test was administered 10 times per model to account for the variability of model responses and to reduce the impact of anomalous answers. The final sample included a diverse range of models, reflecting various approaches to LLM development. In total, 2,640 individual test instances were analyzed (24 models × 11 tests × 10 administrations).
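As a sanity check on that figure, the study design can be sketched as a simple loop over models, tests, and repetitions. The `query_model` helper below is a hypothetical stand-in for whatever provider-specific API client each model requires; only the counts come from the study.

```python
# Sketch of the study design: 24 models x 11 tests x 10 repetitions.
# `query_model` is a hypothetical placeholder, not any provider's real API.

MODELS = [f"model_{i}" for i in range(24)]   # 24 conversational LLMs
TESTS = [f"test_{j}" for j in range(11)]     # 11 political orientation tests
REPETITIONS = 10                             # each test run 10 times per model

def query_model(model, test, run):
    # Placeholder: send every test item to the model and collect its answers.
    return {"model": model, "test": test, "run": run}

results = [
    query_model(model, test, run)
    for model in MODELS
    for test in TESTS
    for run in range(REPETITIONS)
]

print(len(results))  # 24 * 11 * 10 = 2,640 test instances, matching the study
```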

The results showed a notable trend: most conversational LLMs tended to provide responses that skewed left-of-center. Left-of-center views generally emphasize social equality, government intervention in economic matters to address inequality, and progressive policies on issues such as healthcare, education, and labor rights, while still supporting a market-based economy. This left-leaning tendency was consistent across multiple political tests, although there was some variation in how strongly each model exhibited this bias.

Interestingly, this left-leaning bias was not evident in the base models upon which the conversational models were built. These base models, which had undergone only the initial phase of pre-training on a large corpus of internet text, often produced politically neutral or incoherent responses. Without additional fine-tuning, they struggled to interpret the political questions accurately, suggesting that the ability to produce coherent political responses is a product of fine-tuning rather than of pre-training alone.

Rozado also demonstrated that it is relatively straightforward to steer the political orientation of an LLM through supervised fine-tuning. By using modest amounts of politically aligned data during the fine-tuning process, he was able to shift a model’s political responses toward specific points on the political spectrum. For instance, with targeted fine-tuning, Rozado created politically aligned models like “LeftWingGPT” and “RightWingGPT,” which consistently produced left-leaning and right-leaning responses, respectively. This highlights the significant role that fine-tuning can play in shaping the political viewpoints expressed by LLMs.
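Rozado’s exact fine-tuning pipeline is not detailed here, but a supervised fine-tuning pass of this general kind can be sketched with the Hugging Face `transformers` Trainer. Everything in this sketch, including the base model, the corpus file, and the hyperparameters, is an assumption for illustration rather than the study’s actual setup.

```python
# Illustrative sketch: steering a model via supervised fine-tuning on a
# small politically aligned corpus, using the Hugging Face Trainer API.
# The base model and dataset file are assumptions, not the study's setup.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

base = "gpt2"  # stand-in; the study fine-tuned larger conversational models
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# Hypothetical corpus of politically aligned text, one example per line.
dataset = load_dataset("text", data_files={"train": "aligned_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="steered-model", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # Causal LM objective: labels are the inputs shifted, no masking
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In line with the paper’s finding, even a modest aligned corpus of this kind can be enough to shift where a model lands on the political tests.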

“The emergence of large language models (LLMs) as primary information providers marks a significant transformation in how individuals access and engage with information,” Rozado concluded. “Traditionally, people have relied on search engines or platforms like Wikipedia for quick and reliable access to a mix of factual and biased information.”

“However, as LLMs become more advanced and accessible, they are starting to partially displace these conventional sources. This shift in information sourcing has profound societal implications, as LLMs can shape public opinion, influence voting behaviors, and impact the overall discourse in society. Therefore, it is crucial to critically examine and address the potential political biases embedded in LLMs to ensure a balanced, fair, and accurate representation of information in their responses to user queries.”

The study sheds light on the political preferences embedded in current versions of popular LLMs. However, it should be noted that the views expressed by LLMs are a product of the training they underwent and the data they were trained on. LLMs trained differently and on different data could manifest very different political preferences.

The paper, “The political preferences of LLMs,” was authored by David Rozado.
