PsyPost

Russian propaganda campaign used AI to scale output without sacrificing credibility, study finds

by Eric W. Dolan
April 16, 2025
[Adobe Stock]

A new study published in PNAS Nexus shows that generative artificial intelligence has already been adopted in real-world disinformation campaigns. By analyzing a Russian-backed propaganda site, researchers found that AI tools significantly increased content production without diminishing the perceived credibility or persuasiveness of the messaging.

The rapid development of generative artificial intelligence has sparked growing concern among experts, policymakers, and the public. One of the biggest worries is that these tools will make it easier for malicious actors to produce disinformation at scale. While earlier research has demonstrated the persuasive potential of AI-generated text in controlled experiments, real-world evidence has been scarce—until now.

Generative AI refers to algorithms capable of producing human-like text, images, or other forms of media. Tools like large language models can write news articles, summarize documents, and even mimic particular styles or ideological tones. While these tools have many legitimate applications, their potential misuse in propaganda campaigns has sparked serious debate. The authors of the study set out to determine whether these concerns are reflected in real-world behavior by examining the practices of state-backed media manipulation efforts.

“My work primarily focuses on the use (and abuse) of digital technologies by motivated actors. While large language models (LLMs) are only the newest tool which has been repurposed to promote state interests, it is not the first and it will not be the last,” said study author Morgan Wack, a postdoctoral scholar at the University of Zurich’s Department of Communication and Media Research.

“Each novel technology presents distinct challenges that need to be comprehensively identified before effective interventions can be designed to mitigate their impacts. In the case of LLMs, the potential threat has not gone unnoticed, with several researchers and policymakers having sounded the alarm regarding their potential for abuse.”

“However, the covert nature of propaganda campaigns poses challenges for researchers interested in determining how concerned the public should be at any given moment. Drawing on work from colleagues at the Media Forensics Hub, our team wanted to conduct a study which would conclusively show that LLMs do not just have the potential to alter digital propaganda campaigns, but that these changes are already taking place.”

The research centered on DC Weekly, a website exposed in a BBC report as part of a coordinated Russian operation. Although the site appeared to cater to an American audience, it was fabricated using fictional bylines and professional publishing templates. Much of its early content was directly lifted from other far-right or state-run media outlets, often with minimal editing. But beginning in September 2023, the content began to change. Articles started to feature unique phrasing, and some even included prompt remnants from OpenAI’s GPT-3 model, indicating that AI had been integrated into the site’s editorial process.

To study this transition, the researchers scraped the entire archive of DC Weekly articles posted between April and November 2023, totaling nearly 23,000 stories. They pinpointed September 20, 2023, as the likely start of AI use, based on the appearance of leaked language model prompts embedded in articles. From this point onward, the site no longer simply copied articles. Instead, it rewrote them in original language, while maintaining the same underlying facts and media. The researchers were able to trace many of these AI-generated articles back to source content from outlets like Fox News or Russian state media.

After adopting AI tools, the outlet more than doubled its daily article production. Statistical models confirmed that this increase was unlikely to be a coincidence. The researchers also found evidence that AI was used not just for writing, but also for selecting and framing content. Some prompt leaks showed the AI being asked to rewrite articles with specific ideological slants, such as criticizing U.S. support for Ukraine or favoring Republican political figures.

But volume was only part of the story. The researchers also looked at how AI adoption affected the diversity of content topics. Before using AI, DC Weekly focused on a narrow range of hyperpartisan sources. Afterward, it began drawing from a wider array of outlets, covering more varied topics such as crime, international affairs, and gun violence. Using machine learning models, the team quantified this change and found that topic diversity nearly doubled in the post-AI period. This suggests that AI tools helped the propagandists make the site appear more like a legitimate news outlet, with a broader editorial scope.

Another important question was whether this shift in production strategy came at the cost of persuasiveness or credibility. To find out, the researchers conducted a survey experiment with 880 American adults. Participants were randomly assigned to read articles either from before or after the site began using AI. All of the articles focused on Russia’s invasion of Ukraine, ensuring that topic changes would not skew results.

After reading, participants rated both how much they agreed with the article’s thesis and how credible they found the website. The results showed no significant differences between the AI-generated and human-curated content. Articles from both periods were persuasive, and readers found the website equally credible regardless of whether the content had been produced using AI.

“One surprise was that, despite the huge increase in quantity (AI allowed this operation to produce more articles on more topics), the persuasiveness and perceived credibility of those articles did not experience a drop-off,” Wack told PsyPost. “We might have expected more ‘sloppy’ content given increases in volume, but readers actually found the AI-enabled articles just as plausible and credible. This finding illustrates how generative tools have already begun to flood information channels while still appearing authentic.”

This finding adds to earlier experimental research suggesting that generative AI can create content that seems believable and convincing. In the context of a real-world disinformation campaign, these capabilities may offer propagandists a powerful way to expand their operations without sacrificing impact.

“The main takeaway we hope readers get from our study is that generative AI can already be used by propagandists to produce large amounts of misleading or manipulative content efficiently,” Wack explained. “We found that AI-augmented articles were just as persuasive and credible in the eyes of readers as those produced by more traditional methods, and that switching to AI allowed the group in question to dramatically expand both its production rate and the range of topics it covered.”

“What this means is that your readers and the general public need to be more vigilant than ever. Especially when encountering partisan content, whether it be on social media or a seemingly ‘local’ news website, viewers should be aware that the authors on the other end may not be people presenting genuine opinions. In fact, they may not be people at all.”

The authors acknowledged some limitations. Since the analysis focused on a single campaign, it’s unclear whether the findings generalize to other influence operations.

“These results come from the examination of a specific campaign,” Wack noted. “While, as noted in the article, we suspect the specific focus of this campaign likely makes our outcomes an understatement of the impact of the use of AI in the development and dissemination of disinformation, our results are contingent and do not apply to alternative campaigns run in other contexts.”

The researchers also noted that while many signs point to AI being responsible for the increase in content production and diversity, other behind-the-scenes changes could have occurred around the same time. Without direct access to the operators behind the campaign, it’s difficult to fully disentangle the effects.

Even so, the study offers compelling evidence that generative AI is already being used to bolster disinformation campaigns. It shows that AI can make propaganda more scalable, efficient, and sophisticated, without compromising its ability to sway public opinion. These findings have major implications for how societies think about AI governance, information security, and media literacy.

“As these technologies continue to improve, both identification and mitigation of their use for the development of disinformation are going to become increasingly difficult,” Wack told PsyPost. “In order to keep the public informed of the use of AI and LLMs to manipulate the public, we aim to continue conducting research which helps to inform both the public and the development of data-informed policies that proactively take these evolving challenges into consideration.”

The study, “Generative propaganda: Evidence of AI’s impact from a state-backed disinformation campaign,” was authored by Morgan Wack, Carl Ehrett, Darren Linvill, and Patrick Warren.



(c) PsyPost Media Inc