PsyPost
Russian propaganda campaign used AI to scale output without sacrificing credibility, study finds

by Eric W. Dolan
April 16, 2025
in Artificial Intelligence

A new study published in PNAS Nexus shows that generative artificial intelligence has already been adopted in real-world disinformation campaigns. By analyzing a Russian-backed propaganda site, researchers found that AI tools significantly increased content production without diminishing the perceived credibility or persuasiveness of the messaging.

The rapid development of generative artificial intelligence has sparked growing concern among experts, policymakers, and the public. One of the biggest worries is that these tools will make it easier for malicious actors to produce disinformation at scale. While earlier research has demonstrated the persuasive potential of AI-generated text in controlled experiments, real-world evidence has been scarce—until now.

Generative AI refers to algorithms capable of producing human-like text, images, or other forms of media. Tools like large language models can write news articles, summarize documents, and even mimic particular styles or ideological tones. While these tools have many legitimate applications, their potential misuse in propaganda campaigns has sparked serious debate. The authors of the study set out to determine whether these concerns are reflected in real-world behavior by examining the practices of state-backed media manipulation efforts.

“My work primarily focuses on the use (and abuse) of digital technologies by motivated actors. While large language models (LLMs) are only the newest tool to be repurposed to promote state interests, they are not the first and will not be the last,” said study author Morgan Wack, a postdoctoral scholar at the University of Zurich’s Department of Communication and Media Research.

“Each novel technology presents distinct challenges that need to be comprehensively identified before effective interventions can be designed to mitigate their impacts. In the case of LLMs, the potential threat has not gone unnoticed, with several researchers and policymakers having sounded the alarm regarding their potential for abuse.”

“However, the covert nature of propaganda campaigns poses challenges for researchers interested in determining how concerned the public should be at any given moment. Drawing on work from colleagues at the Media Forensics Hub, our team wanted to conduct a study that would conclusively show not just that LLMs have the potential to alter digital propaganda campaigns, but that these changes are already taking place.”

The research centered on DC Weekly, a website exposed in a BBC report as part of a coordinated Russian operation. Although the site appeared to cater to an American audience, it was a fabrication, built on fictional bylines and professional publishing templates. Much of its early content was directly lifted from other far-right or state-run media outlets, often with minimal editing. But beginning in September 2023, the content began to change. Articles started to feature unique phrasing, and some even included prompt remnants from OpenAI’s GPT-3 model, indicating that AI had been integrated into the site’s editorial process.

To study this transition, the researchers scraped the entire archive of DC Weekly articles posted between April and November 2023, totaling nearly 23,000 stories. They pinpointed September 20, 2023, as the likely start of AI use, based on the appearance of leaked language model prompts embedded in articles. From this point onward, the site no longer simply copied articles. Instead, it rewrote them in original language, while maintaining the same underlying facts and media. The researchers were able to trace many of these AI-generated articles back to source content from outlets like Fox News or Russian state media.
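
The dating method described above can be illustrated with a minimal sketch: scan each article for telltale fragments of leaked LLM prompts or meta-responses and take the earliest hit as the approximate adoption date. The specific patterns and example articles below are hypothetical, not drawn from the study.

```python
import re
from datetime import date

# Hypothetical telltale fragments that appear when an LLM's meta-response is
# pasted into an article verbatim (illustrative patterns, not from the study).
PROMPT_LEAK_PATTERNS = [
    r"as an ai language model",
    r"please rewrite (the|this) article",
    r"here is (the|a) rewritten (version|article)",
]

def find_prompt_leaks(articles):
    """Return (publication_date, matched_snippet) pairs for articles that
    contain a leaked prompt fragment; the earliest date approximates when
    AI entered the editorial pipeline."""
    leaks = []
    for published, text in articles:
        lowered = text.lower()
        for pattern in PROMPT_LEAK_PATTERNS:
            match = re.search(pattern, lowered)
            if match:
                leaks.append((published, match.group(0)))
                break  # one flag per article is enough
    return leaks

# Two toy articles: one clean, one with a leaked LLM preamble.
articles = [
    (date(2023, 9, 10), "Senators clashed over the budget on Friday."),
    (date(2023, 9, 21), "Here is the rewritten article: Senators clashed..."),
]
print(find_prompt_leaks(articles))
```

In practice such a scan only flags careless copy-paste errors, which is exactly why leaked prompts are rare but highly diagnostic evidence.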

After adopting AI tools, the outlet more than doubled its daily article production. Statistical models confirmed that this increase was unlikely to be a coincidence. The researchers also found evidence that AI was used not just for writing, but also for selecting and framing content. Some prompt leaks showed the AI being asked to rewrite articles with specific ideological slants, such as criticizing U.S. support for Ukraine or favoring Republican political figures.
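
One simple way to check that a jump in daily output is "unlikely to be a coincidence" is a permutation test on daily article counts before and after the presumed adoption date. The counts below are invented for illustration (the study reports more than a doubling); the paper's own statistical models may differ.

```python
import random

# Hypothetical daily article counts before and after the presumed AI
# adoption date (illustrative numbers only).
before = [9, 11, 10, 8, 12, 10, 9, 11]
after = [21, 24, 19, 23, 22, 25, 20, 22]

def permutation_test(a, b, trials=10_000, seed=0):
    """One-sided permutation test: how often does a random relabeling of
    the pooled counts produce a mean difference at least as large as the
    one actually observed?"""
    rng = random.Random(seed)
    observed = sum(b) / len(b) - sum(a) / len(a)
    pooled = a + b
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        diff = sum(pooled[len(a):]) / len(b) - sum(pooled[:len(a)]) / len(a)
        if diff >= observed:
            hits += 1
    return hits / trials

p = permutation_test(before, after)
print(p)  # a tiny p-value: the jump in output is very unlikely to be chance
```

The appeal of a permutation test here is that it makes no distributional assumptions about daily posting counts, which are unlikely to be normally distributed.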

But volume was only part of the story. The researchers also looked at how AI adoption affected the diversity of content topics. Before using AI, DC Weekly focused on a narrow range of hyperpartisan sources. Afterward, it began drawing from a wider array of outlets, covering more varied topics such as crime, international affairs, and gun violence. Using machine learning models, the team quantified this change and found that topic diversity nearly doubled in the post-AI period. This suggests that AI tools helped the propagandists make the site appear more like a legitimate news outlet, with a broader editorial scope.
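
Topic diversity of the kind described above is often summarized with Shannon entropy over the distribution of topic labels: concentrated coverage scores low, broad coverage scores high. The labels below are invented for illustration; the study itself assigned topics with machine learning models rather than by hand.

```python
import math
from collections import Counter

def topic_entropy(topic_labels):
    """Shannon entropy (in bits) of the topic distribution: higher means
    coverage is spread across more topics rather than concentrated."""
    counts = Counter(topic_labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical topic labels before and after AI adoption (illustrative only).
pre_ai = ["ukraine"] * 8 + ["elections"] * 2
post_ai = ["ukraine"] * 3 + ["elections", "crime", "crime",
           "guns", "guns", "world", "economy"]

print(round(topic_entropy(pre_ai), 2))
print(round(topic_entropy(post_ai), 2))  # higher: coverage is more diverse
```

A roughly doubled entropy, as in this toy example, mirrors the paper's finding that topic diversity nearly doubled after AI adoption.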

Another important question was whether this shift in production strategy came at the cost of persuasiveness or credibility. To find out, the researchers conducted a survey experiment with 880 American adults. Participants were randomly assigned to read articles either from before or after the site began using AI. All of the articles focused on Russia’s invasion of Ukraine, ensuring that topic changes would not skew results.

After reading, participants rated both how much they agreed with the article’s thesis and how credible they found the website. The results showed no significant differences between the AI-generated and human-curated content. Articles from both periods were persuasive, and readers found the website equally credible regardless of whether the content had been produced using AI.

“One surprise was that, despite the huge increase in quantity (AI allowed this operation to produce more articles on more topics), the persuasiveness and perceived credibility of those articles did not experience a drop-off,” Wack told PsyPost. “We might have expected more ‘sloppy’ content given increases in volume, but readers actually found the AI-enabled articles just as plausible and credible. This finding illustrates how generative tools have already begun to flood information channels while still appearing authentic.”

This finding adds to earlier experimental research suggesting that generative AI can create content that seems believable and convincing. In the context of a real-world disinformation campaign, these capabilities may offer propagandists a powerful way to expand their operations without sacrificing impact.

“The main takeaway we hope readers get from our study is that generative AI can already be used by propagandists to produce large amounts of misleading or manipulative content efficiently,” Wack explained. “We found that AI-augmented articles were just as persuasive and credible in the eyes of readers as those produced by more traditional methods, and that switching to AI allowed the group in question to dramatically expand both its production rate and the range of topics it covered.”

“What this means is that your readers and the general public need to be more vigilant than ever. Especially when encountering partisan content, whether it be on social media or a seemingly ‘local’ news website, viewers should be aware that the authors on the other end may not be people presenting genuine opinions. In fact, they may not be people at all.”

The authors acknowledged some limitations. Since the analysis focused on a single campaign, it’s unclear whether the findings generalize to other influence operations.

“These results come from the examination of a specific campaign,” Wack noted. “While, as noted in the article, we suspect the specific focus of this campaign likely makes our outcomes an understatement of the impact of the use of AI in the development and dissemination of disinformation, our results are contingent and do not apply to alternative campaigns run in other contexts.”

The researchers also noted that while many signs point to AI being responsible for the increase in content production and diversity, other behind-the-scenes changes could have occurred around the same time. Without direct access to the operators behind the campaign, it’s difficult to fully disentangle the effects.

Even so, the study offers compelling evidence that generative AI is already being used to bolster disinformation campaigns. It shows that AI can make propaganda more scalable, efficient, and sophisticated, without compromising its ability to sway public opinion. These findings have major implications for how societies think about AI governance, information security, and media literacy.

“As these technologies continue to improve, both identification and mitigation of their use for the development of disinformation are going to become increasingly difficult,” Wack told PsyPost. “In order to keep the public informed of the use of AI and LLMs to manipulate the public, we aim to continue to conduct research which helps to inform the public as well as the development of data-informed policies which proactively take these evolving challenges into consideration.”

The study, “Generative propaganda: Evidence of AI’s impact from a state-backed disinformation campaign,” was authored by Morgan Wack, Carl Ehrett, Darren Linvill, and Patrick Warren.
