PsyPost

People remain “blissfully ignorant” of AI use in everyday messages, new research shows

by Eric W. Dolan
April 20, 2026
in Artificial Intelligence, Social Psychology

A recent study published in Computers in Human Behavior has found that people evaluate others harshly when they know a message was written using artificial intelligence. Yet, individuals tend to remain completely unaware of potential artificial intelligence use in everyday situations. When left in the dark about how a message was created, recipients assume a human wrote it and form positive impressions of the sender.

Generative artificial intelligence refers to computer programs that can produce realistic, human-like text based on simple user instructions. People increasingly use these tools (such as Claude, ChatGPT, and Gemini) to draft emails, social media posts, and text messages. Scientists Jiaqi Zhu and Andras Molnar wanted to explore how relying on these programs affects how we view one another in daily life.

Usually, writing a thoughtful message requires time and mental energy. These efforts signal a sender’s sincerity and investment in a relationship. Because text-generating programs remove this effort, the researchers wanted to know if the availability of these tools makes people more suspicious of the messages they receive.

Past studies have shown that people judge communicators negatively when they know a message was generated by artificial intelligence. However, in the real world, people rarely admit that they used a computer program to write their emails. Zhu and Molnar conducted their research to see how people form impressions in realistic situations where artificial intelligence use is kept secret or remains uncertain.

“In academic settings, discussion of generative AI has become unavoidable since ChatGPT’s release in late 2022. For most instructors, detection and regulation of AI use are now part of the job, and in this climate, it’s easy for vigilance to slide into full-on paranoia. Some instructors may even become overzealous, reading AI into writing that may be entirely human, as evidenced by the growing number of high-profile lawsuits against colleges over students who were failed or expelled based on suspected AI use,” said study author Andras Molnar, an assistant professor of psychology at the University of Michigan.

“But in my conversations with people outside academia, I realized we might be living in a bubble: what feels routine in academia may not reflect how people think elsewhere. That’s what motivated our study: we wanted to understand whether people suspect AI use in everyday contexts like emails, text messages, and social media profiles.”

To investigate these questions, Zhu and Molnar conducted a pair of online experiments. In the first experiment, the researchers recruited 647 adults in the United States and asked them to read a hypothetical email. The participants were randomly assigned to read one of four types of messages. These included a gratitude email from a friend, a job application from a nanny, a cover letter from a data analyst, or project feedback from a colleague.

The scientists divided the participants into four groups, giving each group different information about how the email was written. One group was told the sender wrote the message entirely on their own. Another group was told the sender used an artificial intelligence chatbot to write the entire message.

A third group was told they could not be certain whether the message was human-written or generated by artificial intelligence. The final group received no information about the source of the message. This last group mimics how we usually receive emails in real life.

After reading the email, participants rated their social impression of the sender based on ten personal traits. These traits included friendliness, sincerity, authenticity, and trustworthiness. The researchers found that participants evaluated the sender much more negatively when they knew artificial intelligence was used to write the message.

This finding confirms that an explicit disclosure of artificial intelligence use damages a person’s social reputation. The researchers also analyzed the words participants used to describe their first impressions of the sender. When artificial intelligence was disclosed, participants used fewer positive words and more negative words to describe the sender.

Yet, when participants received no information about how the message was created, they evaluated the sender just as positively as when they knew a human wrote it. The scientists noted that participants in this group showed no natural suspicion. Even in the uncertain group, where the possibility of computer assistance was highlighted, participants formed impressions that were much closer to the human-written group than to the artificial intelligence group.

“In these ordinary, everyday interactions, people really dislike receiving AI-generated messages from others,” Molnar told PsyPost. “For example, we don’t want AI-generated apologies, no matter how polished they are, because they sound inauthentic and hollow; outsourcing deeply personal communication to AI may even feel like a betrayal and signal disrespect.”

“However, this ‘AI penalty’ seems to apply only when we know or strongly suspect that someone used AI to write the message. What our work shows is that without explicit disclosure (for example, a label indicating AI use), people generally don’t suspect AI in everyday situations and treat these messages as if they were fully human-written.”

The researchers conducted a second experiment seven months later to see if rising public familiarity with these text-generating programs would increase natural skepticism. They recruited a new sample of 654 adults in the United States. This time, they updated the scenarios to include a wider variety of communication styles. The new scenarios featured a social media post about a summer internship, text messages apologizing for a canceled dinner, and a detailed online dating profile.

In this second experiment, the scientists asked participants to estimate how much time and mental effort the sender put into the message. The researchers also asked how accurately the text reflected the sender’s true feelings. Participants who were told the text was generated by a computer program gave lower ratings on all three of these measures.

Participants who received no information about the message's source assumed the sender had invested as much mental effort as a confirmed human writer. The researchers found that perceptions of reduced mental effort and lower reflection accuracy statistically explained why participants penalized the artificial intelligence users. The second experiment fully replicated the findings of the first, showing that people remain blissfully ignorant of artificial intelligence use.

“What surprised us most was that people who themselves are heavy users of generative AI (who frequently send AI-generated or AI-edited messages) were not any more likely to suspect that others were using AI,” Molnar said. “We expected that more experience with these tools would make people more skeptical, but it didn’t. In other words, familiarity with AI doesn’t automatically translate into greater suspicion in everyday communication.”

“This finding matters because it suggests that people can outsource their writing to AI with relatively little risk of being detected, or even suspected. This creates an uneven playing field: people who don’t want to use AI, or can’t use it, may be at a disadvantage, while heavy users can come across as more articulate, polished, and effective without incurring negative perceptions — unless they admit that they used AI. And why would they?”

When discussing their findings, the scientists highlighted a potential misinterpretation regarding what the participants were actually evaluating. Molnar explained that the study was designed to measure how people judge the author of a message, not how they judge the quality or effectiveness of the text itself. The focus was entirely on the social impression formed about the person behind the screen.

The study also has a few limitations that provide avenues for future research. The experiments relied on hypothetical scenarios, which means participants might react differently in real-life situations with actual stakes. The researchers also tested a complete use of artificial intelligence rather than a partial use, where a person might simply use a program to edit a few sentences.

Because the research focused on one-way communication, it is unknown how people might react during a live, back-and-forth conversation. Additionally, the study only included participants from the United States. The researchers are particularly interested in exploring what specific situations trigger suspicion in everyday life.

“Our next step is to understand what triggers vigilance and suspicion: what flips the switch between everyday communication and contexts like academia, where people are much more aware of possible AI use? Our current studies already suggest it’s not simply a matter of exposure or familiarity with these tools, since even heavy AI users aren’t more likely to suspect others,” Molnar said.

“So we’re now testing other explanations: for example, whether high-stakes situations (grades, hiring, evaluations) reliably increase vigilance, and whether people become more skeptical only after personally relevant negative experiences that teach them to watch for AI use. I would also love to collect data in other countries (our current experiments were conducted in the US) to see if there are any differences in skepticism and vigilance.”

The study, “Blissful (A)Ignorance: Despite the widespread adoption of AI in communication, people do not suspect AI use in realistic contexts,” was authored by Jiaqi Zhu and Andras Molnar.
