A new national survey indicates that a significant number of American adolescents and young adults are turning to artificial intelligence programs for support with their emotional well-being. The findings suggest that these digital tools are becoming a common resource for young people navigating feelings of distress, despite the lack of established safety standards for such technology. The study was published in JAMA Network Open.
The emergence of generative artificial intelligence has altered how individuals access information and interact with technology. Programs like ChatGPT and Google Gemini offer immediate responses to complex queries, leading to widespread adoption across various age groups.
While this technology has grown in popularity, the United States is simultaneously facing a sustained deterioration in youth mental health. Statistics indicate that nearly one in five adolescents experienced a major depressive episode within the past year.
A substantial portion of these young individuals do not receive professional mental health care. Barriers such as high costs, limited availability of providers, and logistical challenges often prevent access to traditional therapy.
In this context, artificial intelligence offers an alternative that is accessible, affordable, and private. Until now, there has been little empirical evidence to quantify how often young people substitute or supplement professional care with advice from chatbots.
The researchers aimed to fill this knowledge gap by establishing baseline estimates of artificial intelligence usage for mental health purposes. They sought to determine the prevalence of this behavior among a nationally representative sample.
“There has been considerable discussion about the potential for artificial intelligence to provide emotional support to both adults and children. However, there is limited, nationally representative data on how many adolescents self-report using artificial intelligence when they feel sad, angry, or nervous for mental health advice,” said study author Jonathan Cantor, a senior policy researcher at RAND.
The researchers designed a cross-sectional survey targeting youths aged 12 to 21 years. The data collection took place between February and March 2025. The researchers utilized the American Life Panel and Ipsos’ KnowledgePanel to recruit participants. These panels use probability-based sampling methods to ensure the group accurately reflects the broader population of United States households.
The final sample consisted of 1,058 respondents out of more than 2,000 individuals contacted. This group included a diverse mix of backgrounds, with 51 percent identifying as White, 25 percent as Hispanic, and 13 percent as Black. The researchers weighted the survey data to produce statistics that generalize to the population of English-speaking U.S. youths with internet access.
The survey asked participants if they had ever used generative artificial intelligence tools. Specific examples provided to the respondents included ChatGPT, Gemini, and My AI. To ensure that even the youngest participants understood the questions, the researchers avoided clinical terminology. Instead, they asked if respondents had utilized these tools for advice or help when feeling “sad, angry, or nervous.”
The analysis revealed that approximately 13.1 percent of the respondents had used generative artificial intelligence for mental health advice. When extrapolated to the national population, this suggests that roughly 5.4 million adolescents and young adults have sought emotional support from a chatbot.
“The main surprise was the percentage of adolescents that use artificial intelligence when they feel sad, angry, or nervous,” Cantor told PsyPost.
The data showed a clear distinction in usage rates by age. Among adolescents aged 12 to 17, the usage rate fell below the overall average, while prevalence among young adults was nearly twice as high: 22.2 percent of respondents aged 18 to 21 reported using these tools for mental health advice.
The frequency of use suggests that, for many users, this is more than a passing novelty. Among those who reported consulting artificial intelligence for emotional support, 65.5 percent said they did so monthly or more often. This repeated engagement implies a sustained reliance on the technology for coping with difficult emotions.
Participants generally viewed the advice they received in a positive light. The survey results indicated that 92.7 percent of users found the artificial intelligence responses to be somewhat or very helpful. This high satisfaction rate likely reinforces the behavior, encouraging continued use of the technology when negative emotions arise.
“I think it is important to recognize that adolescents are interacting with artificial intelligence and turning to these tools when they feel sad, angry, or nervous,” Cantor said. “They not only use these tools frequently but also perceive the advice they receive as helpful.”
Despite the overall positive reception, the researchers uncovered evidence of demographic disparities. Black respondents were significantly less likely than White, non-Hispanic respondents to report that the advice was helpful.
This finding raises questions about the cultural competency of current artificial intelligence models. It suggests that the datasets used to train these systems may not adequately reflect or understand the experiences of diverse populations.
“It is important to highlight that Black adolescents were less likely to report finding the advice they received helpful,” Cantor said. “Our study could not determine the reasons for this difference, and future research should seek to explore and better understand this finding.”
The high utilization rates are likely driven by the low barrier to entry. Artificial intelligence chatbots are typically free or low-cost and are available at any time of day.
For young people who may feel stigmatized by traditional counseling or who cannot afford it, these tools offer a perceived safe harbor. The anonymity of a chatbot may encourage users to disclose feelings they would hide from a human therapist.
However, the researchers note that this trend is accompanied by significant risks. There are currently few standardized benchmarks for evaluating the quality or safety of mental health advice generated by artificial intelligence. The datasets used to train large language models are often opaque, making it difficult for experts to assess potential biases or inaccuracies.
Concerns regarding the reliability of these systems are not theoretical. The release of this study comes at a time when OpenAI faces legal challenges alleging that its products have contributed to harmful outcomes for some users. The potential for these systems to provide incorrect or inappropriate advice remains a critical issue for developers and health officials.
As with all research, there are limitations to consider. The sample size for the 18 to 21 age group was relatively small, consisting of 147 respondents. This limited number means that the specific estimates for this subgroup have a wider margin of error and should be interpreted with some caution. Additionally, the survey relied on self-reported data, which depends on the accuracy of the participants’ memories and honesty.
The survey did not gather information on whether the respondents had diagnosed mental health conditions. It is unclear if the users turning to artificial intelligence are those with severe clinical needs or those experiencing temporary emotional fluctuations. The study also did not capture the specific content of the advice sought or provided. Consequently, it is impossible to evaluate the clinical appropriateness of the guidance the chatbots offered.
“These results should not be interpreted as causal,” Cantor noted. “Our goal is simply to describe current patterns of use. More empirical research is needed to understand the relationship between adolescents’ use of artificial intelligence and their emotional well-being.”
Looking forward, “I think we should continue to include similar survey questions to track trends in how adolescents use artificial intelligence,” Cantor said. “It is also important to understand healthcare providers’ perspectives on incorporating artificial intelligence into the delivery of mental health care for adolescents.”
The study, “Use of Generative AI for Mental Health Advice Among US Adolescents and Young Adults,” was authored by Ryan K. McBain, Robert Bozick, Melissa Diliberti, Li Ang Zhang, Fang Zhang, Alyssa Burnett, Aaron Kofner, Benjamin Rader, Joshua Breslau, Bradley D. Stein, Ateev Mehrotra, Lori Uscher-Pines, Jonathan Cantor, and Hao Yu.