PsyPost

Russian propaganda campaign used AI to scale output without sacrificing credibility, study finds

by Eric W. Dolan
April 16, 2025
in Artificial Intelligence
[Adobe Stock]


A new study published in PNAS Nexus shows that generative artificial intelligence has already been adopted in real-world disinformation campaigns. By analyzing a Russian-backed propaganda site, researchers found that AI tools significantly increased content production without diminishing the perceived credibility or persuasiveness of the messaging.

The rapid development of generative artificial intelligence has sparked growing concern among experts, policymakers, and the public. One of the biggest worries is that these tools will make it easier for malicious actors to produce disinformation at scale. While earlier research has demonstrated the persuasive potential of AI-generated text in controlled experiments, real-world evidence has been scarce—until now.

Generative AI refers to algorithms capable of producing human-like text, images, or other forms of media. Tools like large language models can write news articles, summarize documents, and even mimic particular styles or ideological tones. While these tools have many legitimate applications, their potential misuse in propaganda campaigns has sparked serious debate. The authors of the study set out to determine whether these concerns are reflected in real-world behavior by examining the practices of state-backed media manipulation efforts.

“My work primarily focuses on the use (and abuse) of digital technologies by motivated actors. While large language models (LLMs) are only the newest tool which has been repurposed to promote state interests, it is not the first and it will not be the last,” said study author Morgan Wack, a postdoctoral scholar at the University of Zurich’s Department of Communication and Media Research.

“Each novel technology presents distinct challenges that need to be comprehensively identified before effective interventions can be designed to mitigate their impacts. In the case of LLMs, the potential threat has not gone unnoticed, with several researchers and policymakers having sounded the alarm regarding their potential for abuse.”

“However, the covert nature of propaganda campaigns poses challenges for researchers interested in determining how concerned the public should be at any given moment. Drawing on work from colleagues at the Media Forensics Hub, our team wanted to conduct a study that would conclusively show not just that LLMs have the potential to alter digital propaganda campaigns, but that these changes are already taking place.”

The research centered on DC Weekly, a website exposed in a BBC report as part of a coordinated Russian operation. Although the site appeared to cater to an American audience, it was a fabrication, built on fictional bylines and professional publishing templates. Much of its early content was lifted directly from other far-right or state-run media outlets, often with minimal editing. But beginning in September 2023, the content began to change. Articles started to feature unique phrasing, and some even included prompt remnants from OpenAI’s GPT-3 model, indicating that AI had been integrated into the site’s editorial process.

To study this transition, the researchers scraped the entire archive of DC Weekly articles posted between April and November 2023, totaling nearly 23,000 stories. They pinpointed September 20, 2023, as the likely start of AI use, based on the appearance of leaked language model prompts embedded in articles. From this point onward, the site no longer simply copied articles. Instead, it rewrote them in original language, while maintaining the same underlying facts and media. The researchers were able to trace many of these AI-generated articles back to source content from outlets like Fox News or Russian state media.
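The researchers' detection of this transition hinged on leaked language-model prompts embedded in published articles. The logic can be sketched as a simple pattern scan; the phrases below are illustrative placeholders, not the actual strings the study identified:

```python
import re

# Hypothetical tell-tale phrases. The study reports leaked LLM prompts
# embedded in published articles; these exact strings are illustrative.
PROMPT_REMNANTS = [
    r"as an ai language model",
    r"rewrite (this|the following) article",
    r"in a tone critical of",
]
pattern = re.compile("|".join(PROMPT_REMNANTS), re.IGNORECASE)

def has_prompt_leak(article_text: str) -> bool:
    """Flag an article that contains residue of an LLM instruction."""
    return bool(pattern.search(article_text))

articles = [
    "Lawmakers debated the aid package on Tuesday.",
    "Rewrite this article in a tone critical of U.S. support for Ukraine: ...",
]
flags = [has_prompt_leak(a) for a in articles]
```

In practice a scan like this only establishes a lower bound, since most AI-rewritten articles would not contain any leaked instructions.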


After adopting AI tools, the outlet more than doubled its daily article production. Statistical models confirmed that this increase was unlikely to be a coincidence. The researchers also found evidence that AI was used not just for writing, but also for selecting and framing content. Some prompt leaks showed the AI being asked to rewrite articles with specific ideological slants, such as criticizing U.S. support for Ukraine or favoring Republican political figures.
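The kind of statistical check described here can be illustrated with a simple permutation test on daily article counts. The numbers below are invented for illustration, not taken from the study:

```python
import random
from statistics import mean

# Toy daily article counts (hypothetical; the study reports the outlet
# more than doubled its output after adopting AI tools).
pre_ai  = [10, 12, 9, 11, 10, 13, 11]    # articles/day before the cutoff
post_ai = [24, 22, 26, 25, 23, 27, 24]   # articles/day after

observed = mean(post_ai) - mean(pre_ai)

# Permutation test: shuffle the period labels and count how often a
# difference at least this large arises by chance.
rng = random.Random(0)
pooled = pre_ai + post_ai
n_pre = len(pre_ai)
trials = 10_000
extreme = 0
for _ in range(trials):
    rng.shuffle(pooled)
    diff = mean(pooled[n_pre:]) - mean(pooled[:n_pre])
    if diff >= observed:
        extreme += 1
p_value = extreme / trials
```

A small `p_value` indicates the jump in output is unlikely under chance relabeling of days, which is the intuition behind "unlikely to be a coincidence."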

But volume was only part of the story. The researchers also looked at how AI adoption affected the diversity of content topics. Before using AI, DC Weekly focused on a narrow range of hyperpartisan sources. Afterward, it began drawing from a wider array of outlets, covering more varied topics such as crime, international affairs, and gun violence. Using machine learning models, the team quantified this change and found that topic diversity nearly doubled in the post-AI period. This suggests that AI tools helped the propagandists make the site appear more like a legitimate news outlet, with a broader editorial scope.
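One standard way to quantify topic diversity, consistent in spirit with the measure described here (though the study's own pipeline used machine learning models), is Shannon entropy over per-article topic labels. The labels below are hypothetical:

```python
from collections import Counter
from math import log2

def topic_entropy(topic_labels):
    """Shannon entropy (in bits) of a list of per-article topic labels;
    higher values mean coverage is spread over more topics."""
    counts = Counter(topic_labels)
    total = len(topic_labels)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Hypothetical labels: a narrow pre-AI mix vs. a broader post-AI mix.
pre  = ["politics"] * 8 + ["ukraine"] * 2
post = ["politics", "ukraine", "crime", "guns", "world"] * 2

e_pre, e_post = topic_entropy(pre), topic_entropy(post)
```

Under these toy labels the post-period entropy is several times the pre-period value, mirroring the reported near-doubling of topic diversity.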

Another important question was whether this shift in production strategy came at the cost of persuasiveness or credibility. To find out, the researchers conducted a survey experiment with 880 American adults. Participants were randomly assigned to read articles either from before or after the site began using AI. All of the articles focused on Russia’s invasion of Ukraine, ensuring that topic changes would not skew results.

After reading, participants rated both how much they agreed with the article’s thesis and how credible they found the website. The results showed no significant differences between the AI-generated and human-curated content. Articles from both periods were persuasive, and readers found the website equally credible regardless of whether the content had been produced using AI.
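A comparison like this reduces to a two-sample test on participants' ratings. A minimal sketch using Welch's t statistic on invented 1-to-7 credibility ratings (not the study's data):

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples with
    possibly unequal variances."""
    se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

# Hypothetical 1-7 credibility ratings for the two article pools.
pre_ai_ratings  = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4]
post_ai_ratings = [4, 4, 5, 4, 3, 5, 4, 4, 5, 4]

t = welch_t(pre_ai_ratings, post_ai_ratings)
# |t| well below ~2 here: no detectable difference in perceived credibility
```

A t statistic near zero, as in this toy example, is the shape of a null result like the one the researchers report.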

“One surprise was that, despite the huge increase in quantity (AI allowed this operation to produce more articles on more topics), the persuasiveness and perceived credibility of those articles did not experience a drop-off,” Wack told PsyPost. “We might have expected more ‘sloppy’ content given increases in volume, but readers actually found the AI-enabled articles just as plausible and credible. This finding illustrates how generative tools have already begun to flood information channels while still appearing authentic.”

This finding adds to earlier experimental research suggesting that generative AI can create content that seems believable and convincing. In the context of a real-world disinformation campaign, these capabilities may offer propagandists a powerful way to expand their operations without sacrificing impact.

“The main takeaway we hope readers get from our study is that generative AI can already be used by propagandists to produce large amounts of misleading or manipulative content efficiently,” Wack explained. “We found that AI-augmented articles were just as persuasive and credible in the eyes of readers as those produced by more traditional methods, and that switching to AI allowed the group in question to dramatically expand both its production rate and the range of topics it covered.”

“What this means is that your readers and the general public need to be more vigilant than ever. Especially when encountering partisan content, whether it be on social media or a seemingly ‘local’ news website, viewers should be aware that the authors on the other end may not be people presenting genuine opinions. In fact, they may not be people at all.”

The authors acknowledged some limitations. Since the analysis focused on a single campaign, it’s unclear whether the findings generalize to other influence operations.

“These results come from the examination of a specific campaign,” Wack noted. “While, as noted in the article, we suspect the specific focus of this campaign likely makes our outcomes an understatement of the impact of the use of AI in the development and dissemination of disinformation, our results are contingent and do not apply to alternative campaigns run in other contexts.”

The researchers also noted that while many signs point to AI being responsible for the increase in content production and diversity, other behind-the-scenes changes could have occurred around the same time. Without direct access to the operators behind the campaign, it’s difficult to fully disentangle the effects.

Even so, the study offers compelling evidence that generative AI is already being used to bolster disinformation campaigns. It shows that AI can make propaganda more scalable, efficient, and sophisticated, without compromising its ability to sway public opinion. These findings have major implications for how societies think about AI governance, information security, and media literacy.

“As these technologies continue to improve, both identification and mitigation of their use for the development of disinformation are going to become increasingly difficult,” Wack told PsyPost. “In order to keep the public informed of the use of AI and LLMs to manipulate the public, we aim to continue conducting research that helps to inform the public, as well as the development of data-informed policies which proactively take these evolving challenges into consideration.”

The study, “Generative propaganda: Evidence of AI’s impact from a state-backed disinformation campaign,” was authored by Morgan Wack, Carl Ehrett, Darren Linvill, and Patrick Warren.

© PsyPost Media Inc
