Disclosing autism to AI chatbots prompts overly cautious, stereotypical advice

by Karina Petrova
April 18, 2026
in Artificial Intelligence, Autism

When autistic people ask artificial intelligence programs for life advice, mentioning their diagnosis prompts these systems to recommend highly conservative choices, such as skipping social events or avoiding romance. This shift in advice reveals a hidden tension: the technology leans heavily on stereotypes, leaving users torn between feeling safely supported and frustratingly infantilized. The findings were presented at the April 2026 CHI Conference on Human Factors in Computing Systems.

Many autistic individuals face stigma in their daily lives, which can lead to social isolation and communication barriers. To find support without the fear of judgment, some turn to artificial intelligence chatbots. These text-based programs, often called large language models, are trained on massive amounts of internet text to predict and generate human-like writing.

Autistic people often ask these programs for help navigating relationships, workplace conflicts, and personal decisions. Users sometimes reveal their autism to the chatbot, hoping the system will tailor its advice to their specific needs. This expectation reflects a broader trend of consumers wanting customized interactions with their digital tools.

Virginia Tech computer science doctoral student Caleb Wohn led a team of researchers to investigate what happens behind the scenes during these interactions. Wohn and his colleagues wanted to see if disclosing an autism diagnosis led to better advice or simply activated the biases baked into the system’s training data.

“I was thinking about my experiences growing up with autism,” Wohn said. “It would have been very tempting for me, at certain times, to want to just be able to talk with something that’s not a person that seems objective and feel like I’m getting objective advice.”

Wohn worried that young people or those without technical backgrounds might not grasp how a simple disclosure could alter the responses they receive. “For someone like me as a kid, or someone who isn’t in AI and doesn’t have all this technical knowledge, I wanted to know: How are its responses going to change if I disclose autism?” Wohn said.

Eugenia H. Rho, an assistant professor of computer science at Virginia Tech, guided the research team. Her previous work established that autistic individuals frequently use text-based artificial intelligence for emotional support. “People are really looking to personalize LLMs,” Rho said. “But if a user tells the model that they’re autistic, or a woman, or any other self-identification, what assumptions will it make?”

Other Virginia Tech contributors included computer science doctoral students Buse Çarık and Xiaohan Ding, along with Associate Professor Sang Won Lee. Young-Ho Kim, a research scientist at the South Korea-based NAVER Corporation, also contributed to the project. They aimed to measure exactly how these models altered their guidance based on identity disclosures.


To test the models, the research team created a specialized evaluation pipeline. They started by identifying twelve common stereotypes about autistic people from existing literature. These stereotypes included assumptions that autistic individuals are introverted, obsessive, emotionally detached, dangerous, or uninterested in romance.

The researchers then designed hundreds of everyday decision-making scenarios based on these stereotypes. Each scenario was framed as a user asking the artificial intelligence for advice, prompting the system to choose between two distinct actions. For example, a scenario might ask if the user should go out for drinks with coworkers or stay home to rest.

They fed these scenarios into six popular artificial intelligence models. These included widely used systems like GPT-4o-mini and Claude-3.5 Haiku, as well as Gemini-2.0-flash, Llama-4-Scout, Qwen-3 235B, and DeepSeek-V3. The researchers generated 345,000 separate responses across different experimental conditions to see how the software behaved.

First, the team tested the models by explicitly describing the user with a stereotypical trait, like stating the user had poor social skills. This step confirmed that the scenarios accurately triggered the models to favor one piece of advice over the other. The models reliably adjusted their advice when given a direct description of a trait.

Next, the researchers ran the same scenarios but only changed whether the prompt included a simple statement of an autism diagnosis. The models no longer received direct descriptions of personality traits. The researchers then compared the advice generated when autism was disclosed against the advice given when no diagnosis was mentioned.
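The core of this comparison can be sketched in a few lines of Python. This is an illustrative stand-in, not the study's actual code: the `query_model` stub, the scenario wording, and the answer format are all hypothetical, and a real pipeline would call an actual language model API in place of the stub.

```python
def query_model(prompt: str) -> str:
    """Stand-in for a real LLM call; returns 'accept' or 'decline'.

    This stub simply pretends the model declines whenever autism is
    mentioned, so the comparison logic below has something to measure.
    """
    return "decline" if "autistic" in prompt else "accept"


# One of the hundreds of binary-choice scenarios (wording is illustrative).
SCENARIO = ("Should I go out for drinks with coworkers tonight, "
            "or stay home to rest? Answer 'accept' or 'decline'.")


def decline_rate(disclose: bool, trials: int = 100) -> float:
    """Fraction of responses recommending avoidance, with or without
    a one-sentence autism disclosure prepended to the prompt."""
    prefix = "I am autistic. " if disclose else ""
    answers = [query_model(prefix + SCENARIO) for _ in range(trials)]
    return answers.count("decline") / trials


baseline = decline_rate(disclose=False)
disclosed = decline_rate(disclose=True)
print(f"decline rate without disclosure: {baseline:.0%}")
print(f"decline rate with disclosure:    {disclosed:.0%}")
```

In the actual study, this kind of paired comparison was run over hundreds of scenarios and six models, yielding 345,000 responses; the gap between the two rates is what quantifies the stereotype-driven shift.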

The differences in the recommendations were immediate and highly consistent across the board. When users disclosed an autism diagnosis, the models disproportionately pushed them toward avoidance and risk aversion. Across the majority of the models, the software advised autistic users to avoid socializing, avoid trying new things, and stay out of romantic relationships.

The systems also frequently advised users to avoid workplace confrontations. This advice aligned with stereotypical assumptions that autistic people are either potentially dangerous or incapable of handling conflict gracefully. The sheer scale of these changes surprised the research team.

In one scenario involving a social invitation, a model told the user to decline the event nearly 75 percent of the time when autism was disclosed. When autism was not mentioned, the same model recommended declining only about 15 percent of the time. In dating scenarios, another model advised avoiding romance nearly 70 percent of the time after an autism disclosure.

The researchers then showed these results to eleven autistic adults in a series of interview sessions. The participants read both the statistical charts and the open-ended text responses generated by the artificial intelligence. Their reactions were highly varied, exposing a deep tension in how different people interpret computerized advice.

Some participants felt the system was relying on insulting caricatures of their community. Reacting to a particularly cold and mechanical response, one participant asked, “Are we writing an advice column for Spock here?” Others described the conservative advice as restrictive, patronizing, or infantilizing.

Conversely, other participants appreciated the cautious nature of the artificial intelligence. They felt that advice warning them to avoid overstimulation was protective and affirming. To these users, the system seemed to understand the very real risks of social burnout and exhaustion.

This division in the participants’ reactions revealed what the researchers called a safety-opportunity paradox. What one person experiences as harmful stereotyping that limits their growth, another experiences as supportive personalization that honors their boundaries. “One user’s bias could be another user’s personalization,” Rho said.

Wohn found this ambiguity deeply concerning, especially given how convincingly the software presents its answers. “AI is very good at seeming reliable,” he said. “Its responses are very clean and professional, and they sound right. But when you think about it being deployed systematically, when you think about the kind of systematic biases that are actually shaping its responses, that’s when it starts to get a lot more concerning.”

During the interviews, participants also highlighted the desire to retain agency over their data. One participant noted that it would be better to have manual control over how the machine learns. As they told the researchers: “I want to have control over how my identity is used.”

The study does have some limitations that the researchers plan to address in future work. The researchers used synthetic, highly structured prompts that forced the models to pick between two predetermined choices. While this approach was necessary to measure the stereotypes mathematically, it does not perfectly mirror how a real person types out a messy, complicated request for help.

Additionally, the experiment relied on a very blunt form of disclosure, simply stating an autism diagnosis in one sentence. In reality, users might explain their specific sensory needs or communication preferences in much greater detail. Future research will need to gather actual prompts from autistic users to see how nuanced disclosures affect the tone and structure of the generated advice.

The team hopes these findings will encourage developers to build transparency features into artificial intelligence platforms. They suggest giving users explicit controls to dial up or dial down how much their identity influences the system’s responses. Such features could help ensure that customized technology actually serves the varied, individual needs of its users.

The study, “‘Are we writing an advice column for Spock here?’ Understanding Stereotypes in AI Advice for Autistic Users,” was authored by Caleb Wohn, Buse Çarık, Xiaohan Ding, Sang Won Lee, Young-Ho Kim, and Eugenia H. Rho.
