Humans create more novelty than ChatGPT when retelling stories

by Vladimir Hedrih
July 16, 2024
in Artificial Intelligence
(Photo credit: Adobe Stock)

A recent study explored the differences between how humans and ChatGPT retell stories. The researchers found that while ChatGPT provided concise summaries of original stories with minimal changes in further retellings, humans introduced significant variations and novel elements with each retelling. The paper was published in Scientific Reports.

ChatGPT is an advanced large language model developed by OpenAI that generates human-like text based on the input it receives, drawing on the vast corpus of human writing it was trained on. This makes it useful for answering questions, writing essays, providing conversational assistance, and many other purposes. Because ChatGPT can produce coherent and contextually relevant responses, it can be used to enhance productivity in many fields of human activity. By automating routine tasks and offering instant information, ChatGPT has the potential to free up human time for more complex and creative endeavors.

The introduction of ChatGPT and other artificial intelligence systems has begun to change how the economy and society function. Their ability to handle large volumes of inquiries and to provide the kind of personal assistance that, until recently, only humans could offer allows people to seek help and information at any time. At the same time, scientists are still exploring how adding ChatGPT to communication systems that relied entirely on humans just a couple of years ago changes the nature of those communications.

Study author Fritz Breithaupt and his colleagues wanted to explore how ChatGPT retells stories compared with how humans do it. They focused on two aspects of retelling: the stability of language and affect preservation. Stability of language refers to whether words, concepts, and grammatical constructions from the original are preserved in the retold version. Affect preservation refers to whether the retelling maintains the emotional tone that was present in the original.

The study involved participants from Amazon Mechanical Turk (MTurk) who were asked to write short stories of approximately 120-160 words, categorized as happy, mildly happy, mildly sad, or sad (without using explicit emotional words like “happy” or “sad”).

Subsequently, 348 other participants were tasked with retelling these stories or their retold versions. Each participant retold three different stories, producing chains in which each story was retold three times by different individuals. This process yielded 116 original stories, each with a chain of three retellings. Finally, 537 participants rated these stories for their emotional content and other characteristics.

ChatGPT 3 was also used to retell the same stories using identical instructions. To ensure fairness, different ChatGPT accounts were employed for each retelling in the chain, preventing the model from accessing previous retellings. The resulting retellings were then rated by 531 individuals recruited via Prolific.
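
For readers curious about the mechanics, a retelling chain of this kind is straightforward to script. The sketch below is not the authors’ code; it is a minimal illustration that assumes access to OpenAI’s chat completions API (via the openai Python package), a model name such as “gpt-3.5-turbo,” and a hypothetical retelling prompt, with each step issued in a fresh conversation so the model never sees earlier retellings.

```python
# Minimal sketch of a retelling chain -- illustrative only, not the study's code.
# Assumes the `openai` Python package (v1+) and an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

# Hypothetical prompt; the study's actual instructions are not reproduced here.
RETELL_PROMPT = "Please retell the following story in your own words:\n\n{story}"

def retell_chain(original_story: str, steps: int = 3, model: str = "gpt-3.5-turbo") -> list[str]:
    """Return the original story followed by `steps` successive retellings,
    each generated in a fresh conversation with no memory of earlier steps."""
    versions = [original_story]
    for _ in range(steps):
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": RETELL_PROMPT.format(story=versions[-1])}],
        )
        versions.append(response.choices[0].message.content)
    return versions
```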

The results showed that both ChatGPT and humans significantly shortened the stories in their retellings. However, ChatGPT’s retellings were substantially shorter right from the first iteration, with only slight decreases in length in subsequent retellings. Humans, on the other hand, progressively reduced the text length with each retelling, displaying greater variability in the retellings’ lengths.

Analysis of the language used in retellings revealed notable differences. ChatGPT maintained more nouns, adjectives, and prepositions, using words typically acquired later in life. In contrast, humans employed more verbs, adverbs, and negations, favoring language acquired at a younger age. This suggests that human retellings focus more on actions and emotions, while ChatGPT emphasizes descriptions and entities.
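
This kind of comparison can be approximated with an off-the-shelf part-of-speech tagger. The sketch below is not the paper’s pipeline; it assumes spaCy and its small English model, and simply reports the share of each part of speech in a text, which could then be compared between human and ChatGPT retellings.

```python
# Sketch of a part-of-speech comparison -- illustrative, not the study's analysis.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")

def pos_proportions(text: str) -> dict[str, float]:
    """Share of each universal POS tag (NOUN, VERB, ADJ, ...) among the words in `text`."""
    doc = nlp(text)
    tags = [token.pos_ for token in doc if token.is_alpha]
    counts = Counter(tags)
    total = sum(counts.values())
    return {tag: n / total for tag, n in counts.items()}

# For example, a higher VERB share in human retellings and a higher NOUN share
# in ChatGPT retellings would match the pattern described above.
```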

Both ChatGPT and humans effectively preserved the emotional tone of the original stories in their retellings. This ability to maintain the core emotional impact highlights a significant similarity between human and machine storytelling, despite differences in how they achieve it.

One of the key findings was that humans displayed ongoing creativity in their retellings, introducing new words and concepts with each iteration. This pattern of incremental reduction combined with the addition of novel elements contrasts sharply with ChatGPT’s approach, which produces a concise summary in the first retelling and makes few changes thereafter. The study found that human retellings become increasingly novel as the process progresses, with approximately 55-60% new content in each retelling.
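
The summary does not spell out how novelty was quantified, but the underlying idea, the share of words in a retelling that did not appear in the previous version, can be sketched with simple set operations (an approximation; the paper’s measure may be defined differently).

```python
# Rough lexical novelty measure -- an approximation, not the paper's exact metric.
import re

def word_types(text: str) -> set[str]:
    """Lowercased word types appearing in a text."""
    return set(re.findall(r"[a-z']+", text.lower()))

def novelty(previous: str, retelling: str) -> float:
    """Fraction of word types in `retelling` that do not occur in `previous`."""
    prev_words, new_words = word_types(previous), word_types(retelling)
    return len(new_words - prev_words) / len(new_words) if new_words else 0.0

# Applied along a human retelling chain, a measure like this would be expected
# to land around 0.55-0.60 per step, in line with the findings described above.
```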

These findings suggest that while ChatGPT can serve as a valuable tool in various applications, it cannot fully replicate the richness and variability of human narrative communication.

“The results reveal that spontaneous retelling by humans involves ongoing creativity, anchored by emotions, beyond the default probabilistic wording of large language models such as ChatGPT,” the study authors concluded.

The study sheds light on the differences between how ChatGPT and humans retell stories. However, it should be noted that both human and ChatGPT retellings depend on the instructions the retellers receive. A tweak to those instructions could produce quite different results.

The paper, “Humans create more novelty than ChatGPT when asked to retell a story,” was authored by Fritz Breithaupt, Ege Otenen, Devin R. Wright, John K. Kruschke, Ying Li, and Yiyan Tan.
