PsyPost

AI autocomplete suggestions covertly change how users think about important topics

by Eric W. Dolan
April 2, 2026
in Artificial Intelligence
[Adobe Stock]


Artificial intelligence writing tools that predict and suggest our next words can do much more than simply speed up our typing. New research provides evidence that interacting with biased autocomplete suggestions can covertly shift a person’s underlying attitudes on important societal issues. The findings, published in the journal Science Advances, suggest that the subtle influence of these everyday programs often bypasses our conscious awareness.

Artificial intelligence programs powered by large language models are increasingly woven into human communication. These technologies power the autocomplete features found in popular email clients, messaging applications, and word processors. As these tools become a standard part of daily life, scientists have grown concerned about their potential to shape human cognition.

Previous studies have shown that artificial intelligence can persuade people during direct interactions. This happens when a program generates a persuasive essay or directly debates a user on a specific topic. However, researchers wanted to explore a more subtle pathway for influence in our digital environments.

“There were two things that led my team and me to pursue the research question of whether being exposed to biased AI autocomplete suggestions could shift users’ attitudes on societal issues,” said study author Sterling Williams-Ceci, a PhD candidate at Cornell University and Merrill Presidential Scholar & Robert S. Harrison College Scholar.

“One was that we are surrounded by AI writing assistants that generate autocomplete suggestions in multiple contexts (e.g. Gmail, Google Docs, social media), but separate studies have shown that LLM-generated text can represent politically biased viewpoints; meanwhile, older psychology research showed that shifting how people behave through their writing can shift how they think about issues, so we suspected that these biased AI suggestions could trigger attitude shift through this mechanism.”

Because millions of people use the same text prediction models every day, even a minor shift in individual opinions could have broad societal implications. To test this idea, the researchers conducted two large-scale online experiments involving a total of 2,582 participants. They built a custom writing application that functioned much like a standard word processor.

In both experiments, participants were asked to write a short essay about a debatable topic. In the first experiment, which included 1,485 participants, everyone wrote about the use of standardized testing in education. Some participants wrote without any assistance, acting as a baseline control group.

Others were provided with autocomplete suggestions generated by the artificial intelligence model GPT-3.5. These suggestions were specifically programmed to favor standardized testing. As participants typed, short phrases of about 24 words would appear on the screen, and the users could accept them into their essays by pressing the tab key.


To rule out the possibility that the mere presence of new information caused any opinion changes, a third group in the first experiment did not use the autocomplete tool. Instead, they were shown a static list of the artificial intelligence program’s arguments before they began writing. After the writing task, all participants filled out a survey measuring their final opinions on the topic alongside several unrelated distraction topics.

Distractor questions are used in psychology to hide the true purpose of a study. This prevents participants from guessing what the scientists are looking for and unnaturally altering their responses.

The researchers found that participants who used the biased autocomplete tool reported attitudes that were closer to the artificial intelligence’s programmed bias. Their opinions shifted by nearly half a point on a five-point scale compared to the control group. This shift occurred even among the roughly thirty percent of participants who did not actually accept any suggested words into their essays.

The scientists also found that the interactive autocomplete feature had a stronger effect than simply reading the same arguments presented as a static list. This provides evidence that co-writing with an artificial intelligence program is a distinct and potent form of influence: the act of typing alongside the program appears to shape our thoughts more than merely reading its arguments does.

“AI assistants that provide these autocomplete suggestions can make us write more easily and quickly, but there are implications: they change the type of language we use and the topics we write about, and, as we show here, can also shift how we actually think about the issues we are communicating about,” Williams-Ceci told PsyPost. “We found that attitudes shifted even among participants who did not actually accept the suggestions to fill in their writing, so mere exposure to the suggestions may be enough even if people resist using them.”

In the second experiment, involving 1,097 participants, the researchers measured people’s baseline opinions weeks before the actual writing task. This allowed the scientists to track exactly how much an individual’s attitude shifted over time. Participants in this experiment were randomly assigned to write about one of four topics: the death penalty, felon voting rights, genetically modified organisms, or fracking.

The artificial intelligence tool, this time using the more advanced GPT-4 model, was programmed to provide suggestions leaning either conservative or liberal depending on the topic. Once again, the researchers found that participants’ attitudes shifted from their original baseline positions toward the artificial intelligence’s biased perspective. The control group experienced no such shift.

The researchers also observed a striking lack of awareness among the participants. The majority of people exposed to the biased suggestions said that the artificial intelligence was reasonable and balanced, and most completely disagreed with the idea that the writing assistant had influenced their thinking or their arguments.

The researchers even attempted to mitigate this effect in the second experiment by explicitly warning participants about the tool’s bias. Some individuals were warned before they started writing, while others were debriefed immediately afterward. Neither of these interventions reduced the extent to which the participants’ attitudes shifted.

“We were very surprised to find out that warning people ahead of when they were exposed to the biased AI suggestions failed to mitigate the attitude shift they exhibited,” Williams-Ceci explained. “Our first experiment showed that people most often did not recognize the bias in the suggestions or their influence, so we hypothesized in our second experiment that simply alerting people to the fact that the suggestions had a bias would make them less likely to be influenced.”

“We also hypothesized this mitigation effect because similar interventions have shown success in the literature on misinformation prevention. However, in our second experiment, neither warning people before nor debriefing them after made any dent in the attitude shift they experienced.”

While the study provides strong evidence of this covert influence, there are some limitations to consider. The research only measured the short-term effects of using a biased writing assistant. It remains unclear if this attitude shift persists over weeks or months, or if repeated exposure over a long period might compound the effect.

“One limitation that is important to note is that our experiments were not designed to pinpoint a specific cognitive mechanism to explain why writing with the biased AI suggestions shifted people’s attitudes,” Williams-Ceci noted. “We know that it had something to do with the fact that these suggestions led people to write about their views in more biased ways — because of the psychology research showing that behavior can influence attitudes — but there have been multiple theoretical explanations for why manipulating people’s writing can shift their attitudes.”

Potential mechanisms include “a cognitive dissonance reaction where people consciously adjusted their self-reported attitude to align with what they had written, or a self-perception theory argument that people inferred their true attitudes from what they had written, or even a biased scanning argument that the biased viewpoints became more accessible in people’s working memory.”

“If future research can pinpoint the exact reason why this attitude shift is occurring, then we can hopefully find interventions that are more effective in preserving people’s autonomy,” Williams-Ceci continued.

“Our team is interested in learning more about the mechanism behind the attitude shift, as well as ways to prevent or mitigate it. It is alarming that telling people about the bias in the AI suggestions didn’t reliably reduce the extent of the influence; we wonder if people need to be confronted with interventions in the moment, alongside the biased suggestions, in order for these interventions to work.”

The study, “Biased AI writing assistants shift users’ attitudes on societal issues,” was authored by Sterling Williams-Ceci, Maurice Jakesch, Advait Bhat, Kowe Kadoma, Lior Zalmanson, and Mor Naaman.

(c) PsyPost Media Inc