PsyPost

Managers who use AI to write emails seen as less sincere, caring, and confident

by Eric W. Dolan
September 24, 2025
in Artificial Intelligence, Business
[Adobe Stock]

Many professionals now use artificial intelligence tools to assist with writing, but a new study suggests that managers who use AI to craft routine workplace emails risk appearing less trustworthy. While AI-assisted messages were generally seen as polished and professional, managers who relied heavily on such tools were viewed as less sincere, caring, and confident by their employees. The findings, published in the International Journal of Business Communication, provide evidence that although AI-generated messages are often seen as effective and efficient, they may come at a social cost.

The release of generative artificial intelligence tools like ChatGPT sparked a surge of interest in their use for everyday writing tasks, including those in professional settings. Many workers now rely on these tools to draft emails, reports, or internal memos. Research has already shown that AI-assisted writing can enhance the clarity, correctness, and professionalism of workplace messages. But less is known about how senders of such messages are perceived.

The goal of the new study was to examine not the writing itself, but how readers interpret the character of someone who uses AI to compose a message. In other words, does using AI affect how trustworthy, sincere, or competent the writer appears? And does the answer change depending on whether the message was mostly written by AI or lightly assisted?

The research also aimed to explore how these perceptions shift depending on who is using the AI. Are people more forgiving of their own use of AI than they are of others? Do they judge managers differently than peers?

“I believe AI will significantly impact our interpersonal relationships. People will use AI a lot to assist with communication. This already happens in the workplace. I’d like people to be aware of the impact of AI-mediated communication,” said study author Peter Cardon, the Warren Bennis Chair in Teaching Excellence and professor of business communication at the University of Southern California.

The research team surveyed 1,158 full-time working professionals in the United States, each of whom spent at least half of their work time on a computer. Participants were randomly shown one of eight different scenarios describing an email message that congratulated a team on reaching its goals. The scenarios varied along two dimensions: who the message was from (either the participant or their supervisor) and how much of the message was generated by AI (ranging from low to high assistance).

Some messages showed only light editing by AI, while others had been mostly written by an AI tool based on a short prompt. In some cases, the original prompt given to the AI was shown to participants; in others, it was not. After reading their assigned message, participants answered a series of questions about perceived authorship, effectiveness, professionalism, sincerity, caring, and confidence, as well as their comfort with the use of AI.

The survey included both numerical rating scales and an open-ended question asking participants to explain why they thought authorship did or did not matter in workplace communication.


Overall, the results indicated that while people viewed AI-assisted messages as generally professional and effective, they were less likely to trust the sender—especially when that sender was a supervisor using a high level of AI assistance.

In particular, participants were less likely to believe that supervisors were the true authors of messages heavily assisted by AI. While 93 percent agreed that a supervisor was the author in the low-assistance condition, only 25 percent agreed in the high-assistance condition when the prompt was not visible.

Despite this, heavily AI-assisted messages were not rated as less effective. In fact, messages with high AI involvement were sometimes seen as slightly more effective than those with less assistance. Participants often described AI as a useful tool for improving grammar, tone, and structure. Many said they didn’t mind if AI was used to polish writing, as long as the content still reflected the sender’s own ideas.

“Minor use of AI, primarily for making small edits to professional emails, is generally considered appropriate,” Cardon told PsyPost.

Still, there was a clear tension between message quality and perceptions of the sender. Supervisors who relied heavily on AI were consistently rated as less sincere, caring, and confident. Only about 40 percent of participants considered supervisors in the high-assistance conditions to be sincere, compared to over 80 percent in the low-assistance conditions.

“The biggest surprise was the intensity of feelings,” Cardon said. “Many respondents expressed indignation about bosses using AI for emails.”

The open-ended responses revealed several reasons behind this skepticism. Many participants expressed a sense of disappointment or frustration when learning that a message—especially a congratulatory one—had been largely written by AI. Some described it as “lazy,” “insincere,” or “dishonest.” Others said it felt like the manager didn’t care enough to write a personal message. This lack of effort was perceived by some as a lack of investment in the team’s success.

Some participants also questioned the competence of supervisors who relied heavily on AI. A number of respondents said they would expect managers to be capable of writing a simple email without outside help, and using AI for this purpose might signal a lack of leadership or communication skills.

The results also showed a significant perception gap between how participants viewed their own use of AI and how they judged others, particularly their supervisors. People tended to evaluate their own AI-assisted writing more favorably than that of their boss. When they imagined themselves using AI, they were more likely to see it as a helpful support tool. But when supervisors used it, especially without much transparency, the use was more likely to raise doubts about sincerity and trustworthiness.

Despite these concerns, most participants said they were generally comfortable with AI being used for this type of message. Even in the high-assistance conditions, a majority said they had no problem with supervisors using AI to write a congratulatory email. However, their comfort often came with caveats. Many participants emphasized that the acceptability of AI use depends on the nature of the message. Messages that are relational or emotional in tone, such as praise or support, were viewed as less appropriate for AI generation than factual updates or routine reminders.

Several respondents also raised longer-term concerns about the repeated use of AI in workplace communication. Some worried that overuse could lead to a loss of human connection or undermine team cohesion. Others feared that if AI becomes the default for all types of messaging, even interpersonal ones, the workplace could begin to feel impersonal or transactional.

“Professionals should be aware of the reputational and relational risks of overusing AI in business communication,” Cardon advised.

As with all research, there are limitations. The study focused on a specific type of message—an email congratulating a team—which may not generalize to all workplace communication. Responses may have differed if the message was about conflict resolution, feedback, or performance reviews. Future research could explore how perceptions vary across different genres of communication and different professional contexts.

The study also centered on the supervisor-subordinate relationship, where power dynamics may heighten concerns about sincerity and trust. Perceptions might differ in peer-to-peer scenarios, or when subordinates use AI to communicate upward.

“We’re at the early stages of mass AI use,” Cardon noted. “The tools will continue to evolve and people’s attitudes may change too.”

The researchers recommend additional studies on whether people feel that AI use should be disclosed, and how that disclosure might affect trust. They also suggest exploring how attitudes toward AI-assisted writing change over time as such tools become more embedded in everyday work life.

“We want to accurately represent people’s views, attitudes, and experiences as AI becomes more embedded in daily communication,” Cardon explained. “We hope this information empowers individuals to use AI in ways that improve their lives and their relationships. We’re all on an AI journey now. We should discuss it and use it thoughtfully and with purpose.”

The study, “Professionalism and Trustworthiness in AI Assisted Workplace Writing: The Benefits and Drawbacks of Writing With AI,” was authored by Peter W. Cardon and Anthony W. Coman.


(c) PsyPost Media Inc