PsyPost
Being honest about using AI can backfire on your credibility

by Oliver Schilke and Martin Reimann
May 29, 2025
in Artificial Intelligence
[Adobe Stock]

Whether you’re using AI to write cover letters, grade papers or draft ad campaigns, you might want to think twice about telling others. That simple act of disclosure can make people trust you less, our new peer-reviewed article found.

As researchers who study trust, we see this as a paradox. After all, being honest and transparent usually makes people trust you more. But across 13 experiments involving more than 5,000 participants, we found a consistent pattern: Revealing that you relied on AI undermines how trustworthy you seem.

Participants in our study included students, legal analysts, hiring managers and investors, among others. Interestingly, we found that even evaluators who were tech-savvy were less trusting of people who said they used AI. While having a positive view of technology reduced the effect slightly, it didn’t erase it.

Why would being open and transparent about using AI make people trust you less? One reason is that people still expect human effort in writing, thinking and innovating. When AI steps into that role and you highlight it, your work looks less legitimate.

But there’s a caveat: If you’re using AI on the job, the cover-up may be worse than the crime. We found that quietly using AI can trigger the steepest decline in trust if others uncover it later. So being upfront may ultimately be a better policy.

Why it matters

A global survey of 13,000 people found that about half had used AI at work, often for tasks such as writing emails or analyzing data. People typically assume that being open about using these tools is the right choice.

Yet our research suggests doing so may backfire. This creates a dilemma for those who value honesty but also need to rely on trust to maintain strong relationships with clients and colleagues. In fields where credibility is essential – such as finance, health care and higher education – even a small loss of trust can damage a career or brand.

The consequences go beyond individual reputations. Trust is often called the social “glue” that holds society together. It drives collaboration, boosts morale and keeps customers loyal. When that trust is shaken, entire organizations can feel the effects through lower productivity, reduced motivation and weakened team cohesion.

If disclosing AI use sparks suspicion, users face a difficult choice: embrace transparency and risk a backlash, or stay silent and risk being exposed later – an outcome our findings suggest erodes trust even more.

That’s why understanding the AI transparency dilemma is so important. Whether you’re a manager rolling out new technology or an artist deciding whether to credit AI in your portfolio, the stakes are rising.

What still isn’t known

It’s unclear whether this transparency penalty will fade over time. As AI becomes more widespread – and potentially more reliable – disclosing its use may eventually seem less suspect.

There’s also no consensus on how organizations should handle AI disclosure. One option is to make transparency completely voluntary, which leaves the decision to disclose to the individual. Another is a mandatory disclosure policy across the board. Our research suggests that the threat of being exposed by a third party can motivate compliance if the policy is stringently enforced through tools such as AI detectors.

A third approach is cultural: building a workplace where AI use is seen as normal, accepted and legitimate. We think this kind of environment could soften the trust penalty and support both transparency and credibility.


This article is republished from The Conversation under a Creative Commons license. Read the original article.
