PsyPost

How generative artificial intelligence is upending theories of political persuasion

by Karina Petrova
April 1, 2026
in Artificial Intelligence, Political Psychology
[Adobe Stock]


Artificial intelligence programs can persuade people to temper their political views, but highly customized messages or deep conversations with bots do not seem to work any better than a single basic argument. These results challenge long-held academic theories about what makes political messaging effective, suggesting that targeted data and interactive debates might not provide the advantages that politicians expect. The findings were recently published in the Proceedings of the National Academy of Sciences.

Changing the minds of voters is an essential feature of democratic societies. Advocacy groups, public health officials, and political candidates spend vast amounts of money attempting to sway public opinion on polarizing topics. Despite decades of study, the exact psychological processes that dictate whether a person will change their mind remain difficult to pin down. Academic researchers often face practical limitations when studying how social communication works in the real world.

Two central concepts have dominated the academic understanding of targeted messaging. The first is message customization, which is also known in the political realm as microtargeting. This theory proposes that a message will be much more effective if it is explicitly tailored to the personal traits, values, or demographics of the person receiving it. The core idea is that persuaders should adapt their message to the audience rather than expecting the audience to adapt to the message.

The second concept is known as the elaboration likelihood model. This model suggests that attitude change is more durable when people exert heavy cognitive effort. In other words, if a person has to actively think about a topic, ask questions, or defend their views in a conversation, the resulting shift in opinion should last longer than if they simply read a static flyer.

Historically, it has been surprisingly difficult to isolate these two mechanisms in a laboratory setting. Human researchers or actors participating in experiments introduce unwanted variables into the interactions. A human confederate might change their tone of voice, display subtle facial expressions, or introduce social pressure that alters how the test subject forms an opinion.

Lisa P. Argyle, a political scientist at Brigham Young University, led a team of researchers hoping to solve this exact methodological problem. Argyle collaborated with Brigham Young University colleagues Ethan C. Busby, Joshua R. Gubler, Alex Lyman, Justin Olcott, Jackson Pond, and David Wingate. They theorized that generative artificial intelligence could act as a perfectly controlled debate partner for human test subjects.

By using large language models, the research team could generate text with a consistent tone and style for thousands of varying interactions. This allowed them to isolate the effects of customization and cognitive elaboration without the messy interference of human social dynamics. They wanted to know if highly tailored messages or interactive chats actually outperformed a single, well-written generic argument.

To answer this question, the research team designed two preregistered online survey experiments involving nearly 3,700 adults in the United States. The researchers recruited a pool of respondents that roughly matched national census averages for age, gender, and race. They also ensured an even balance of political ideologies, including equal numbers of Democrats and Republicans.


The first study focused on the contentious topic of immigration. Participants answered a series of questions about their support for increased border security spending and their opinions on sponsoring immigrant visas. The second study focused on the curriculum used in public educational settings. Specifically, this survey asked participants how much control parents should have over controversial social topics and whether teachers should bring personal political views into the classroom.

After establishing these baseline opinions, the researchers randomly assigned the participants to either a control group or to one of four experimental interventions. All of the experimental interventions used a large language model to try to persuade the participant to change their mind. The goal of the bot was always to argue against the participant’s original beliefs.

The first experimental group received a single generic message. The software was instructed to act as an expert and write the strongest possible paragraph arguing for the opposing political viewpoint. This text was not adapted to the specific person reading it.

The second group received a microtargeted message. In this scenario, the artificial intelligence was fed all the demographic data the participant had provided at the start of the survey. The bot used this background information to craft a highly personalized argument, testing the modern concept of customized political campaigning.

The third group engaged in a direct, interactive debate. Participants had to exchange six conversational turns with the artificial intelligence program. The bot was instructed to act as a psychology expert, providing counterarguments and asking follow-up questions to force the participant into deep cognitive engagement.

The final experimental group participated in an interactive motivational interview. Motivational interviewing is a psychological technique often used in therapy to help people find internal motivation to alter their own behavior. Instead of directly debating the participants, the bot asked reflective questions intended to help respondents convince themselves to adopt a new perspective.

To verify the integrity of the experiment, the researchers ran a secondary evaluation on the text generated by the bot. They used machine learning techniques to map out the core arguments contained in every single message. This confirmed that the fundamental facts and claims remained identical across all the groups, with only the presentation style changing.

The final results contradicted the expectations set by decades of academic literature. Across the board, encountering the opposing arguments did cause participants to moderate their views. On average, the respondents shifted their political attitudes by roughly 2.5 to 4 percentage points in the direction of the opposing argument.

The surprising takeaway was that the advanced techniques did not perform any better than the basic approach. The personalized messages and the interactive chats failed to produce more attitude change than the single generic message. In fact, the motivational interviewing technique was often the least effective method evaluated during the trials.

These numbers suggest that customization and cognitive elaboration might not be the powerful psychological levers that campaign strategists assume they are. If political microtargeting does provide an advantage, that advantage is extremely small. A simple, generally persuasive argument appears to be just as effective as a tailored digital debate.

The researchers tracked a secondary outcome called democratic reciprocity. This metric captures whether a person is willing to view their political opponents as reasonable people who are worthy of respect. The academic community has debated for years whether moderating a person’s issue-based opinions will automatically reduce their overarching prejudice against opposing groups.

The study provided a relatively clear answer to this secondary question. Even though many participants moderated their actual policy opinions, this shift rarely translated into increased respect for the other side. The ideological gap between voters shrank, but their hostility toward opposing political groups remained essentially unchanged.

The one exception occurred during the interactive chats regarding public school curriculums. In that specific setting, participants did show an increase in democratic reciprocity. The researchers suspect this happened because the bot explicitly argued for the necessity of social tolerance as part of its talking points in the curriculum debate.

The researchers note that these immediate findings should not be interpreted as the final word on political communication. The experiments only examined brief interactions occurring in an isolated digital environment. It is entirely possible that personalization and cognitive elaboration work much better over a period of months or years.

Additionally, personal connections between actual humans might rely on social pressures that artificial intelligence cannot easily mimic. A deeply reasoned argument coming from a close friend might trigger different psychological responses than a similar argument presented by an anonymous survey tool. Researchers hope to explore these boundaries in future investigations.

Ultimately, the project demonstrated that generative artificial intelligence can be a highly effective tool for studying the social sciences. Creating customized arguments for thousands of test subjects would require massive staffing and financial resources if attempted entirely by humans. The software allowed the academic team to evaluate influential theories at a scale that was previously impossible.

The study, “Testing theories of political persuasion using AI,” was authored by Lisa P. Argyle, Ethan C. Busby, Joshua R. Gubler, Alex Lyman, Justin Olcott, Jackson Pond, and David Wingate.

PsyPost is a psychology and neuroscience news website dedicated to reporting the latest research on human behavior, cognition, and society. (READ MORE...)

(c) PsyPost Media Inc
