People consistently devalue creative writing generated by artificial intelligence

by Eric W. Dolan
April 5, 2026
in Artificial Intelligence
[Adobe Stock]

A recent study published in the Journal of Experimental Psychology: General suggests that people consistently judge creative writing more harshly if they believe it was created by artificial intelligence. The bias proved remarkably difficult to overcome, pointing to a persistent human preference for art created by people.

Generative artificial intelligence refers to computer programs capable of producing new text, images, or music by predicting patterns from massive amounts of data. Tools like ChatGPT and Claude can now write essays, poems, and stories that read very much like they were written by a real person. As these technologies become more common, scientists wanted to understand how people react to computer-generated art.

“We started this project in early 2023, shortly after the launch of ChatGPT. From my early interactions with the technology, it was clear to me that this tool was capable of creative production, and I was very curious about whether and how humans would react to AI-produced creative goods,” explained study author Manav Raj, an assistant professor in management at the Wharton School of the University of Pennsylvania.

Prior research hints that people might not be able to tell the difference between human and computer writing if they are kept in the dark. However, the researchers conducted this specific study to see what happens when audiences are explicitly told that a machine wrote the text. They wanted to see if this knowledge changes how people enjoy the art and whether anything can soften that negative reaction.

To explore these questions, the scientists carried out sixteen separate experiments involving a total of 27,491 participants. In the first group of five experiments, the researchers tested whether the actual content of the writing changed how people reacted to the artificial intelligence label. Participants read poems and short stories generated by ChatGPT and rated them on quality, creativity, and enjoyment.

Some participants were told a machine wrote the text, while others were told a human wrote it. The researchers varied the writing style, testing first-person versus third-person perspectives, poetry versus prose, and different emotional tones. They even tested stories featuring human characters versus aliens, animals, and robots.

Across all these variations and thousands of participants, readers consistently gave lower ratings to the text when they thought a machine wrote it. Changing the story details did not consistently lessen this penalty. This initial phase provided evidence that the bias is largely independent of the specific content of the writing.

In the second phase of the research, the scientists conducted an experiment with 3,590 participants to see if the evaluation context mattered. They asked one group to judge the text as a piece of art. They asked another group to judge it based on objective qualities like coherence and logic.

Changing the instructions in this way did not soften the negative reaction. Participants in both groups still devalued the writing when they believed it came from a computer. This suggests that the bias applies whether people are reading for pleasure or for practical evaluation.

Next, the researchers ran five more experiments to see if changing people’s perceptions of the computer program would help. In these studies, they asked participants to read articles about the impressive cognitive or emotional capabilities of machines before reading the generated stories. In some versions, the scientists also tried humanizing the software by giving it a name and a gender.

None of these strategies reliably reduced the negative bias. Even when the computer program was described as highly capable or given human traits, participants still rated the writing lower upon learning its origin. The negative reaction proved remarkably persistent across these diverse approaches.

“The surprise to us was how persistent the effect was,” Raj told PsyPost. “We really tried at different points to ‘break’ it and to find circumstances where we could get the AI disclosure discount to go away. Despite our attempts that built on existing literature on algorithmic aversion, we found this result was really sticky.”

In a fourth set of experiments, the scientists explored whether knowing a computer wrote a story simply makes people feel ambivalent. Ambivalence means having mixed feelings, where someone might see both positive and negative qualities in the same thing at the same time. Across two studies with 423 and 1,280 participants, respectively, the researchers sought to measure this specific emotional state.

They found that knowing about the computer involvement did not create mixed feelings. It simply made the participants’ judgments more negative overall. The disclosure did not create a complex emotional response, but rather a straightforward decrease in appreciation.

Finally, the researchers ran three experiments to test a concept involving a human in the loop. They wanted to know if framing the writing process as a collaboration between a person and a machine would be viewed more favorably. They tested this with machine-generated stories and with actual award-winning short stories written by humans.

When participants were told a person used a computer program as a tool to write the story, they still judged the work just as harshly as if the machine had written it alone.

Throughout the studies, the researchers collected data on various potential mechanisms, such as perceived humanness, effort, and emotional depth. They consistently found that perceived authenticity was the strongest factor explaining the lowered ratings: people simply view machine-generated text as less authentic than human creations.

“Our main finding is that, at least at this point, humans have a persistent, negative reaction to knowing that creative goods (or at least creative writing) are produced with the help of AI,” Raj said. “While everything with AI is a moving target right now, this lasted over many, many studies and a roughly two-year period of data collection.”

While these findings provide evidence of a strong bias, there are a few potential limitations to keep in mind. The participants were recruited from an online platform that tends to attract people who are somewhat tech-savvy. This means the results might not perfectly represent the entire global population.

The observed biases could also manifest differently in visual arts, music, or other physical products. It is entirely possible that attitudes will shift as society becomes more accustomed to this technology. Future research could explore whether this negative bias fades over time as machine-generated text becomes an everyday reality.

“One thing I’d note is that our study does not speak to the quality of AI-generated creative goods at all,” Raj explained. “In all cases, we held the writing sample constant and just manipulated whether participants believed it was written by AI. Accordingly, the quality and nature of the creative goods are an open question.”

“This last point is a question that I’d be interested in studying in the future. While we are using AI for creative purposes and innovation, we do not yet know what it means for the characteristics of creative goods (other than some research that suggests we have a hard time telling apart AI-generated vs. human-generated creative goods in some settings). I’m very interested in pushing further in this domain.”

The study, “The Artificial Intelligence Disclosure Penalty: Humans Persistently Devalue AI-Generated Creative Writing,” was authored by Manav Raj, Justin M. Berg, and Rob Seamans.

© PsyPost Media Inc
