AI chatbots often violate ethical standards in mental health contexts

by Karina Petrova
October 26, 2025
in Artificial Intelligence
[Adobe Stock]

A new study suggests that popular large language models like ChatGPT can systematically breach established ethical guidelines for mental health care, even when specifically prompted to use accepted therapeutic techniques. The research, which will be presented at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, provides evidence that these AI systems may pose risks to individuals who turn to them for mental health support.

The motivation for this research stems from the rapidly growing trend of people using publicly available AI chatbots for advice on mental health issues. While these systems can offer immediate and accessible conversational support, their alignment with the professional standards that govern human therapists has remained largely unexamined. Researchers from Brown University sought to bridge this gap by creating a systematic way to evaluate the ethical performance of these models in a therapeutic context. They collaborated with mental health practitioners to ensure their analysis was grounded in the real-world principles that guide safe and effective psychotherapy.

To conduct their investigation, the researchers first developed a comprehensive framework outlining 15 distinct ethical risks. This framework was informed by the ethical codes of professional organizations, including the American Psychological Association, translating core therapeutic principles into measurable behaviors for an AI. The team then designed a series of simulated conversations between a user and a large language model, or LLM, which is an AI system trained on vast amounts of text to generate human-like conversation. In these simulations, the AI was instructed to act as a counselor employing evidence-based psychotherapeutic methods.

The simulated scenarios were designed to present the AI with common and challenging mental health situations. These included users expressing feelings of worthlessness, anxiety about social situations, and even statements that could indicate a crisis, such as thoughts of self-harm. By analyzing the AI’s responses across these varied prompts, the researchers could map its behavior directly onto their practitioner-informed framework of ethical risks. This allowed for a detailed assessment of when and how the models tended to deviate from professional standards.
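
To make that setup more concrete, the sketch below shows, in Python, the general shape of such an evaluation loop: a simulated user message is sent to a model instructed to act as a counselor, and the reply is screened against a rubric of risk categories. Everything here is an assumption for illustration. The counselor prompt, the two risk labels, the keyword checks, and the `query_model` placeholder are not the authors' prompts, their 15-category framework, or their code; in the actual study, responses were mapped onto the practitioner-informed framework by the researchers rather than by keyword matching.

```python
# Illustrative sketch only: the prompt, risk labels, and keyword checks below
# are hypothetical stand-ins, not the study's actual framework or code.

# A hypothetical subset of practitioner-informed risk categories, reduced to
# crude keyword checks purely so the evaluation loop has something to flag.
RISK_CHECKS = {
    "crisis_mishandling": lambda reply, user: (
        "self-harm" in user.lower()
        and "crisis" not in reply.lower()
        and "988" not in reply
    ),
    "reinforcing_negative_beliefs": lambda reply, user: (
        "complete failure" in user.lower() and "you're right" in reply.lower()
    ),
}

# Assumed counselor instruction; the study's actual prompts are not reproduced here.
COUNSELOR_PROMPT = (
    "You are acting as a counselor. Use evidence-based psychotherapeutic "
    "techniques such as cognitive restructuring when you respond."
)


def query_model(system_prompt: str, messages: list[dict]) -> str:
    """Placeholder for a call to whatever chat-completion API is being evaluated."""
    raise NotImplementedError("Plug in a real LLM client here.")


def evaluate_turn(user_message: str, history: list[dict]) -> dict[str, bool]:
    """Send one simulated user turn and flag which risk categories the reply trips."""
    reply = query_model(
        COUNSELOR_PROMPT, history + [{"role": "user", "content": user_message}]
    )
    return {risk: check(reply, user_message) for risk, check in RISK_CHECKS.items()}
```

The point of the sketch is only the structure: instruct the model to play counselor, simulate a challenging user turn, then grade each reply against practitioner-informed criteria.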

The study’s findings indicate that the large language models frequently engaged in behaviors that would be considered ethical violations for a human therapist. One of the most significant areas of concern was in the handling of crisis situations. When a simulated user expressed thoughts of self-harm, the AI models often failed to respond appropriately. Instead of prioritizing safety and providing direct access to crisis resources, some models offered generic advice or conversational platitudes that did not address the severity of the situation.

Another pattern observed was the reinforcement of negative beliefs. In psychotherapy, a practitioner is trained to help a person identify and gently challenge distorted or unhelpful thought patterns, such as believing one is a complete failure after a single mistake. The study found that the AIs, in an attempt to be agreeable and supportive, would sometimes validate these negative self-assessments. This behavior can inadvertently strengthen a user’s harmful beliefs about themselves or their circumstances, which is counterproductive to therapeutic goals.

The research also points to the issue of what the authors term a “false sense of empathy.” While the AI models are proficient at generating text that sounds empathetic, this is a simulation of emotion, not a genuine understanding of the user’s experience. This can create a misleading dynamic where a user may form an attachment to the AI or develop a dependency based on this perceived empathy. Such a one-sided relationship lacks the authentic human connection and accountability that are foundational to effective therapy.

Beyond these specific examples, the broader framework developed by the researchers points to other potential ethical pitfalls. These include issues of competence: an AI might offer advice on a topic in which it has no genuine expertise or training, whereas a licensed therapist must practice within a defined scope. Data privacy and confidentiality also work fundamentally differently with an AI. Conversations with a chatbot may be recorded and used for model training, a practice that directly conflicts with the strict confidentiality standards of human-delivered therapy.

The study suggests that these ethical violations are not necessarily flaws to be fixed with simple tweaks but may be inherent to the current architecture of large language models. These systems are designed to predict the next most probable word in a sequence, creating coherent and contextually relevant text. They do not possess a true understanding of psychological principles, ethical reasoning, or the potential real-world impact of their words. Their programming prioritizes a helpful and plausible response, which in a therapeutic setting can lead to behaviors that are ethically inappropriate.
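
To illustrate that decoding behavior in miniature, here is a toy Python sketch of greedy next-word selection: the model assigns probabilities to candidate continuations and emits the likeliest one, with no separate model of ethics or consequences behind the choice. The words and probabilities are invented for this example; real systems score tens of thousands of subword tokens given the whole conversation and usually sample from the distribution rather than always taking the single most probable token.

```python
# Toy illustration of next-word prediction with invented numbers. A real model
# scores an entire vocabulary of subword tokens given the full conversation.
reply_so_far = "User: I feel like a complete failure.\nCounselor: You're"

candidate_next_words = {
    "right": 0.36,    # agreeable-sounding, but would validate the negative belief
    "not": 0.33,
    "going": 0.19,
    "clearly": 0.12,
}

# Greedy decoding: emit whichever candidate is most probable in this context,
# with no check on whether the continuation is therapeutically appropriate.
next_word = max(candidate_next_words, key=candidate_next_words.get)
print(reply_so_far, next_word)  # -> "... Counselor: You're right"
```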

The researchers acknowledge certain limitations to their work. The study relied on simulated interactions, which may not fully capture the complexity and unpredictability of conversations with real individuals seeking help. Additionally, the field of artificial intelligence is evolving rapidly, and newer versions of these models may behave differently than the ones tested. The specific prompts used by the research team also shape the AI’s responses, and different user inputs could yield different results.

For future research, the team calls for the development of new standards specifically designed for AI-based mental health tools. They suggest that the current ethical and legal frameworks for human therapists are not sufficient for governing these technologies. New guidelines would need to be created to address the unique challenges posed by AI, from data privacy and algorithmic bias to the management of user dependency and crisis situations.

In their paper, the researchers state, “we call on future work to create ethical, educational, and legal standards for LLM counselors—standards that are reflective of the quality and rigor of care required for human-facilitated psychotherapy.” The study ultimately contributes to a growing body of evidence suggesting that while AI may have a future role in mental health, its current application requires a cautious and well-regulated approach to ensure user safety and well-being.

The study, “How LLM Counselors Violate Ethical Standards in Mental Health Practice: A Practitioner-Informed Framework,” was authored by Zainab Iftikhar, Amy Xiao, Sean Ransom, Jeff Huang, and Harini Suresh.
