Generative AI simplifies science communication, boosts public trust in scientists

by Eric W. Dolan
January 15, 2025

AI-generated summaries of scientific studies are simpler and more accessible to general readers, and they also improve public perceptions of scientists’ trustworthiness and credibility, according to recent research published in PNAS Nexus. By comparing traditional summaries written by researchers with AI-generated versions, the study highlights how AI can enhance the public’s understanding of scientific information while fostering positive attitudes toward science.

Large language models, such as ChatGPT, are advanced artificial intelligence systems trained on vast amounts of text data to understand and generate human-like language. These models rely on deep learning techniques, specifically neural networks, to analyze patterns in text and predict the most likely sequences of words based on input. The primary strength of these models lies in their ability to process and produce natural language, making them useful for tasks like text summarization, translation, and answering questions.

“I was interested in this topic because many social scientific studies on AI (at the time) were focused on benchmarking AI performance against human performance for a range of tasks (e.g., how well could AI models respond to surveys or behavioral tasks like humans),” said David M. Markowitz, an associate professor of communication at Michigan State University.

“I wanted to take this a step further to see how AI could enhance and improve consequential aspects of everyday life, not just meet human-like performance. Therefore, I studied how people appraise and make sense of scientific information. The results of this paper largely suggest AI can make scientific information simpler for people, and this can have downstream positive impacts for how humans think about scientists, research, and their understanding of scientific information.”

Markowitz first examined whether lay summaries of scientific articles are simpler than the corresponding scientific abstracts. He focused on articles published in the Proceedings of the National Academy of Sciences (PNAS), a highly regarded journal that provides both technical summaries (abstracts) and lay summaries (significance statements), compiling a dataset of over 34,000 articles that included both types of summaries.

The texts were analyzed using an automated linguistic tool called Linguistic Inquiry and Word Count (LIWC). This software evaluates texts on various linguistic dimensions, including the use of common words, writing style, and readability. Common words were identified as those frequently used in everyday language, which tend to make texts more accessible. Writing style was measured by analyzing how formal or analytical the text appeared, while readability scores considered the length of sentences and the complexity of vocabulary.
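
The readability dimension of this kind of analysis can be approximated with a standard formula such as Flesch Reading Ease, which rewards shorter sentences and shorter words. The sketch below is illustrative only: it is not LIWC, and the two example sentences are invented, but it shows how a lay-style sentence scores as easier to read than a dense, abstract-style one.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels;
    # every word has at least one syllable.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    # Flesch Reading Ease: higher scores mean simpler text.
    # 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) \
        - 84.6 * (syllables / len(words))

# Invented examples: an abstract-style sentence vs. a lay-style one.
abstract = "We quantify the modulation of transcriptional networks."
lay = "We measured how genes turn on and off."

# The lay sentence should receive a higher (easier) score.
print(flesch_reading_ease(lay) > flesch_reading_ease(abstract))
```

Tools like LIWC use far richer dictionaries and stylistic measures, but the underlying intuition is the same: sentence length and word complexity are quantifiable proxies for how accessible a text is.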

The analysis revealed that lay summaries were indeed simpler than scientific abstracts: they used more common words, shorter sentences, and a less formal writing style. These differences, while statistically significant, were relatively small, raising questions about whether lay summaries were simplified enough for non-expert readers to fully understand the scientific content. The findings pointed to an opportunity for further simplification, which motivated the exploration of AI-generated summaries in the next phase of the study.

Markowitz next conducted an experiment to evaluate whether AI-generated summaries could simplify scientific communication more effectively than human-authored lay summaries. He selected 800 scientific abstracts from the PNAS dataset and used a popular generative AI model, ChatGPT-4, to create corresponding summaries. The AI was instructed to write concise, clear significance statements, following the guidelines provided to PNAS authors for writing lay summaries.

Participants for Study 2 were recruited online and included a diverse sample of 274 individuals from the United States. Each participant was presented with summaries from both AI and human authors, but the pairings were randomized to avoid bias. After reading the summaries, participants were asked to evaluate the credibility, trustworthiness, and intelligence of the authors. They were also asked to assess the complexity and clarity of the summaries and to indicate whether they believed each summary was written by a human or AI.

The results showed that AI-generated summaries were perceived as simpler, clearer, and easier to understand compared to human-authored ones. Participants rated the authors of AI-generated summaries as more trustworthy and credible but slightly less intelligent than the authors of human-written summaries. Interestingly, participants were more likely to mistake AI-generated summaries for human work, while associating the more complex human-written summaries with AI.

These findings demonstrated the potential of generative AI to enhance science communication by making it more accessible and fostering positive perceptions of scientists. However, the slight reduction in perceived intelligence highlighted a trade-off between simplicity and expertise.

In his third and final study, Markowitz expanded on Study 2 by examining not only public perceptions of AI-generated summaries but also their impact on comprehension. The experiment included a larger and more diverse set of stimuli, with 250 participants reading summaries from 20 pairs of AI and human-authored texts. Each participant was randomly assigned to five pairs of summaries, with one summary from each pair being AI-generated and the other human-written.

After reading each summary, participants were asked to answer a multiple-choice question about the scientific content to test their understanding. They were also asked to summarize the main findings of the research in their own words. The summaries provided by participants were evaluated for accuracy and detail using an independent coding process.

Markowitz found that participants demonstrated better comprehension when reading AI-generated summaries. They were more likely to correctly answer multiple-choice questions and provided more detailed and accurate summaries of the scientific content. These results indicated that the linguistic simplicity of AI-generated summaries facilitated deeper understanding and better retention of information.

As in Study 2, participants rated AI-generated summaries as clearer and less complex than human-written ones. However, the perception of authors’ intelligence remained lower for AI-generated texts, and there were no significant differences in credibility and trustworthiness ratings between the two types of summaries.

“I hope that the average person recognizes the positive impact of simplicity in everyday life,” Markowitz told PsyPost. “Simple language feels better to most people than complex language and therefore, I hope this work also suggests that people should demand more from scientists to make their work more approachable, linguistically. Complex ideas do not necessarily need to be communicated in a complex manner.”

While the results are promising, the study has limitations. The data were drawn from a single journal, PNAS, which may not represent all scientific fields or publication practices. Future studies could expand the scope to include a variety of journals and disciplines to confirm the generalizability of these findings.

Future research could explore how AI-generated summaries perform across different scientific domains, the long-term effects on public scientific literacy, and ways to balance simplicity with nuance. It could also examine whether integrating AI tools into the writing process improves the quality of scientific communication from the outset.

The study, “From complexity to clarity: How AI enhances perceptions of scientists and the public’s understanding of science,” was published in September 2024.
