A new study published in the Journal of Personality and Social Psychology suggests that simply believing a piece of creative work was made by artificial intelligence—rather than a human—can make people feel more confident in their own creative abilities. Across multiple experiments involving jokes, poetry, visual art, and storytelling, people consistently felt more capable when they thought they were comparing themselves to an AI creator.
Generative artificial intelligence refers to computer systems that can produce text, images, and other forms of media in response to prompts, mimicking human creativity. Tools like ChatGPT, Midjourney, and others are now widely accessible and regularly used to create jokes, artwork, stories, and summaries. As people interact more with this kind of content online, the researchers set out to understand how these interactions might shape people’s self-perceptions—especially their creative self-confidence, which is a person’s belief in their own creative potential.
“As generative artificial intelligence (gen-AI) becomes more widespread, it is increasingly important to understand how people psychologically respond to the content it produces,” said study author Taly Reich, an associate professor of marketing at NYU Stern School of Business.
The researchers designed a series of preregistered experiments to explore whether exposure to AI-generated content changes how people view their own abilities. Their guiding theory drew from classic ideas in psychology about social comparison. People often evaluate themselves by comparing their abilities to those of others. If someone views a peer’s work and believes the peer is more skilled, their own confidence might go down. But if the comparison target seems less capable, their confidence may go up. The researchers wanted to know where generative AI falls on this spectrum: Do people view AI as a higher or lower creative standard?
To explore this, the researchers conducted seven experiments with a total of 6,801 participants from the United States and the United Kingdom. In each study, participants were shown the same creative work, but it was randomly labeled as being produced either by a generative AI system or by a fellow participant. The studies spanned different creative domains, including humor, poetry, drawing, storytelling, and caption writing.
Participants were then asked to evaluate their own creative abilities and how capable they thought the content’s author was. Across studies, the researchers also explored whether this boost in confidence led to behavioral outcomes—such as a greater willingness to create content—and tested whether the effect would still occur with high- or low-quality content or in non-creative domains like factual writing.
In Studies 1A, 1B, and 1C, participants were exposed to jokes, visual art, and poetry, respectively. In each case, the content was exactly the same, but half of the participants were told it came from AI, while the other half were told it came from another person. After reading or viewing the content, participants were asked how confident they were in their ability to produce something better. Those who believed the work came from AI consistently rated themselves as more capable. This effect was strongest when participants also believed the AI was less talented in that domain, suggesting that people use the perceived lower ability of AI as a way to boost their own self-assessments.
Study 2 focused on storytelling. Participants read a short story labeled as either AI- or human-generated and were then given the option to write their own story using a prompt. Those who believed the original story came from AI were significantly more likely to say they wanted to try writing a story themselves. The researchers found that the increase in creative self-confidence translated into a greater willingness to engage in the task.
Study 3 tested whether this newfound self-confidence was warranted. Participants read a cartoon caption they believed was written by either AI or a human, then wrote their own caption. Although those who read the AI-attributed caption reported higher confidence and rated their own captions more positively, external judges found no difference in the actual quality of their work. This suggests that the confidence boost may not always be justified.
In Study 4, the researchers manipulated the quality of the creative content itself. Regardless of whether participants were shown a low-quality or high-quality caption, they still felt more confident in their abilities when they believed the caption came from AI. This indicates that people’s perceptions of the author’s general ability—not just the quality of the specific work—played a stronger role in shaping their self-confidence.
Study 5 tested whether the effect was limited to creative domains. Participants were shown either a creative story or a factual explanation about rain, again labeled as either AI- or human-generated. As before, participants felt more confident after seeing AI-labeled creative content. But this effect disappeared in the factual domain. People judged AI and human authors equally when it came to fact-based writing, and their own confidence remained unchanged regardless of the label.
Together, the studies show that people tend to view AI as a less capable social comparison point when it comes to creative work. This makes them feel more confident in their own creative abilities, even when the content they viewed was identical to what they would have seen had it been attributed to a human. The researchers argue that this is a form of downward social comparison, where people compare themselves to a perceived lower-performing “other” to feel better about their own skills.
“Can exposure to generative AI content reshape people’s self-views? This work finds that when people are exposed to the exact same creative content but believe that it was created by generative AI (vs. another person), they have greater confidence in their own creative abilities,” Reich told PsyPost. “This can lead people to be more likely to attempt a creative activity, even if they don’t have the objective ability underlying their newfound creative self-confidence.”
This boost in self-confidence could have both benefits and drawbacks, the researchers said. On the one hand, it might help people overcome hesitation and take creative risks they might otherwise avoid. For example, students struggling to begin a writing assignment might feel more confident after seeing an AI-generated example. On the other hand, if the confidence boost is not grounded in actual ability, it could lead to overconfidence and poor performance.
“For companies and educators looking to bolster the creative self-confidence of their members, exposure to generative AI-labeled work can help bolster those self-perceptions,” Reich said. “This can also be useful if people are simply stuck in how to proceed with a task; asking generative AI to create something can help inspire psychological confidence to do it yourself.”
The researchers also explored whether perceptions of AI’s creative ability could change over time. In follow-up studies, they found that people’s self-confidence was affected by whether the AI was described as having emotional depth or authenticity—traits that are commonly associated with human creativity. This suggests that people may be willing to update their beliefs about AI’s creative potential, which could in turn shift how comparisons to AI influence self-perception.
One strength of the research is its use of preregistered designs for most of the studies, which helps reduce researcher bias and strengthens the credibility of the findings. The large overall sample size and the replication of key results across multiple independent studies also support the reliability of the conclusions, at least within Western contexts.
But there are still some limitations to consider. The experimental settings were intentionally minimalistic, designed to isolate specific psychological processes. As a result, they may not fully capture how people engage with AI-generated content in everyday life. The samples were also drawn exclusively from Western countries, so the findings may not apply universally. Finally, most of the conclusions rely on mediation analysis, which infers the psychological process from statistical patterns rather than direct observation.
The study, “Does Artificial Intelligence Cause Artificial Confidence? Generative Artificial Intelligence as an Emerging Social Referent,” was authored by Taly Reich and Jacob D. Teeny.