Readers struggle to understand AI’s role in news writing, study suggests

by Eric W. Dolan
June 29, 2025
in Artificial Intelligence

A new study published in Communication Reports has found that readers interpret the involvement of artificial intelligence in news writing in varied and often inaccurate ways. When shown news articles with different byline descriptions—some noting that the article was written with or by AI—participants offered a wide range of explanations about what that meant. Most did not see AI as acting independently. Instead, they constructed stories in their minds to explain how AI and human writers might have worked together.

As AI technologies become more integrated into journalism, understanding how people interpret AI’s role becomes increasingly important. Generative artificial intelligence refers to tools that can produce human-like text, images, or audio based on prompts or data. In journalism, this often means AI is used to summarize information, generate headlines, or even write full articles based on structured data.

Since 2014, some newsrooms have used AI to automate financial and sports stories. But the release of more advanced tools, such as ChatGPT in late 2022, has expanded the possibilities and made AI much more visible in everyday news production. For example, in 2023, a large media company in the United Kingdom hired reporters whose work includes AI assistance and noted this in their bylines. However, readers are not always told exactly how AI contributed, which can create confusion or suspicion.

The researchers behind the new study wanted to know how people understand bylines that mention AI and whether their interpretations are influenced by their familiarity with media and attitudes toward artificial intelligence. They were especially interested in whether people could accurately infer what AI did during the creation of a news article just based on the wording in the byline. This is important because trust in journalism depends on transparency, and previous controversies—such as Sports Illustrated being accused of using AI-generated content without disclosure—have shown that unclear authorship can damage credibility.

To explore these questions, the research team designed an online study involving 269 adult participants. The sample closely reflected the U.S. population in terms of age, gender, and ethnicity. Participants were recruited through Prolific, an online platform often used for social science research, and were paid for their time. After giving consent, participants completed a short questionnaire measuring their media literacy and general attitudes toward artificial intelligence. Then, each person was randomly assigned to read a slightly edited Associated Press article about a health story. The article was the same for everyone, except for one line at the top—the byline.

The byline took one of five forms: some participants saw the story credited simply to a “staff writer,” while others saw “by staff writer with AI tool,” “by staff writer with AI assistance,” “by staff writer with AI collaboration,” or simply “by AI.” After reading the article, participants were asked to explain what they thought the byline meant and how they interpreted the role of AI in writing the article.
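
To make the between-subjects design concrete, here is a minimal sketch in Python of how a five-condition byline manipulation like this could be set up. The condition labels follow the study's descriptions; the fixed seed, the assignment logic, and the render_stimulus helper are hypothetical illustrations, not the researchers' actual materials.

```python
import random

# Byline conditions as described in the study; everything else below is an
# illustrative assumption, not the authors' actual procedure.
BYLINE_CONDITIONS = [
    "by staff writer",
    "by staff writer with AI tool",
    "by staff writer with AI assistance",
    "by staff writer with AI collaboration",
    "by AI",
]

N_PARTICIPANTS = 269  # sample size reported in the study

rng = random.Random(42)  # fixed seed so the assignment is reproducible
# Randomly assign each participant to exactly one byline condition.
assignments = [rng.choice(BYLINE_CONDITIONS) for _ in range(N_PARTICIPANTS)]

def render_stimulus(article_text: str, byline: str) -> str:
    """Everyone sees the same article text; only the byline line differs."""
    return f"{byline.capitalize()}\n\n{article_text}"

# Example: the stimulus shown to the first participant.
print(render_stimulus("WASHINGTON (AP) -- ...", assignments[0]))
```

Random assignment of this kind is what allows any differences in interpretation across conditions to be attributed to the byline wording itself, rather than to which participants happened to read which version.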

The responses showed that readers tried to make sense of the byline even when it wasn’t entirely clear. This act of constructing meaning from limited information is known as “sensemaking”—a process where people use what they already know or believe to understand something new or ambiguous. In this case, people relied on their personal experiences, assumptions about journalism, and existing knowledge of AI.

Many participants assumed that AI helped in some way, even if they couldn’t say exactly how. Some thought the AI wrote most of the article, with a human editor stepping in to clean things up. Others believed that a human wrote the bulk of the article, but used AI for smaller tasks, such as checking facts or suggesting better wording.

One person imagined the journalist typed in a few keywords, and AI pulled together text from the internet to generate the article. Another described a collaborative effort where AI gathered background information, and the human writer then evaluated its accuracy. These mental models—often called “folk theories”—illustrate how readers try to fill in the gaps when information is missing or vague.

Interestingly, even when the byline said the article was written “by AI,” many participants still assumed a human had been involved in some way. This suggests that most people do not see AI as a fully independent writer. Instead, they believe human oversight is necessary, whether for guidance, supervision, or final editing.

Some participants expressed skepticism or even frustration with the byline. When the article was credited to a “staff writer” but no name was given, some assumed this was an attempt to hide the fact that AI had actually written it. Others said the writing quality was poor and attributed that to AI involvement, even when the article had been described as written by a human. In both cases, the absence of a named author led to negative judgments. This finding supports earlier research showing that readers expect transparency in authorship and may distrust content when those expectations are not met.

To further understand what influenced these interpretations, the researchers grouped participants based on their media literacy and their general attitudes toward AI. Media literacy refers to how well people understand the media they consume, including how news is produced.

The researchers found that participants with higher media literacy were more likely to believe that AI had done most of the writing. Those with lower media literacy were more likely to assume that a human wrote the article, or that the work was a human-AI collaboration. Surprisingly, prior attitudes toward AI did not significantly affect how participants interpreted the byline.
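
As a rough sketch of what this grouping step might look like in practice, the snippet below cross-tabulates coded interpretations against a median split on media-literacy scores. The toy data, the median split, and the three interpretation categories are all assumptions made for illustration; the paper's actual coding scheme and analysis may differ.

```python
import pandas as pd

# Toy data: one row per participant. "interpretation" stands in for the
# qualitatively coded open-ended answers; the values here are invented.
df = pd.DataFrame({
    "media_literacy": [4.2, 2.1, 3.8, 1.9, 4.5, 2.7],  # questionnaire scores
    "interpretation": [
        "AI wrote most", "human wrote", "AI wrote most",
        "human-AI collaboration", "AI wrote most", "human wrote",
    ],
})

# Split participants into higher/lower media-literacy groups at the median.
df["literacy_group"] = (
    df["media_literacy"] > df["media_literacy"].median()
).map({True: "higher", False: "lower"})

# Share of each interpretation within each literacy group.
print(pd.crosstab(df["literacy_group"], df["interpretation"], normalize="index"))
```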

This suggests that how much people know about the media may matter more than how they feel about artificial intelligence when trying to figure out who wrote a story. It also shows that simply including a phrase like “with AI assistance” is not enough to give readers a clear understanding of AI’s role. The study found that people often misinterpret or overthink these statements, and the lack of standard language around AI involvement only adds to the confusion.

The study has some limitations. Because the researchers did not include a named author in any of the byline conditions, it’s possible that participants reacted negatively because they missed seeing a real person’s name—something they expect from journalism. It’s also worth noting that the article used in the study was based on science reporting, which tends to be more objective and less interpretive. Reactions to AI involvement might be stronger for topics like politics or opinion writing. Future studies could explore how these findings apply to other types of journalism and examine how people respond when articles include a full disclosure or transparency statement about AI use.

Despite these limitations, the study raises important questions for news organizations. As AI becomes more common in the newsroom, it is not enough to say that a story was produced “with AI.” Readers want to know what exactly the AI did—did it write the first draft, summarize data, suggest edits, or merely spellcheck the final copy? Without this clarity, readers are left to guess, and those guesses often lean toward suspicion or confusion.

The researchers argue that greater transparency is needed, not only as a matter of ethics but as a way to maintain trust in journalism. According to guidelines from the Society of Professional Journalists, journalists are expected to explain their processes and decisions to the public. This expectation should extend to AI use. As with human sources, AI contributions need to be clearly cited and described.

The study, “Who Wrote It? News Readers’ Sensemaking of AI/Human Bylines,” was authored by Steve Bien-Aimé, Mu Wu, Alyssa Appelman, and Haiyan Jia.
