A new study published in Scientific Reports has introduced a promising diagnostic tool that could dramatically shorten the long wait times many families face when seeking evaluations for autism and attention-related conditions. The research team used artificial intelligence to analyze subtle patterns in how people move their hands during simple tasks, identifying with surprising accuracy whether someone is likely to have autism, attention-deficit traits, or both. The method, which relies on wearable motion sensors and deep learning, could one day serve as a rapid and objective screening tool to help clinicians triage children for further assessment.
Autism and attention-deficit disorders are both classified as neurodevelopmental conditions, meaning they affect how the brain grows and functions. Autism spectrum conditions are typically identified by challenges in social interaction, communication differences, and repetitive behaviors. Attention-deficit disorders, which may include hyperactivity, are marked by difficulties with focus, impulsivity, and sustained attention.
While these are distinct diagnoses, they often co-occur in the same individual. In fact, up to 70 percent of people diagnosed with autism also show signs of attention-related difficulties. Despite their prevalence, diagnosing these conditions remains a long and complex process, often involving interviews, questionnaires, and behavioral observations that can take months or even years to schedule.
The researchers, led by Jorge José, sought to develop a new way to detect neurodevelopmental differences using objective data. They focused on a very basic behavior: the motion of reaching for something. Although such movement may seem simple, research has shown that subtle patterns in how someone moves their body can reveal a great deal about how their brain processes information.
Prior studies have suggested that children with autism or attention problems often show differences in motor planning and coordination, even during early development. These insights led the researchers to ask whether high-resolution motion tracking, paired with machine learning, could detect patterns unique to different neurodevelopmental conditions.
“I was trained as a theoretical physicist. But during the last 20 years I have been working in quantitative neuroscience. I now have a lab. We started by studying the visual cortex in non-human primates. They performed the simple process of reaching for targets appearing randomly on a computer touch screen. This is the same protocol we have used in our recent paper,” explained José, the James H. Rudy Distinguished Professor of Physics at Indiana University.
“In our 2013 autism paper, we found that by looking at natural movements at millisecond time scales, away from naked-eye detection, each participant’s movements were totally different, with autism being more random than neurotypical. This led us later in 2018, in another paper, to study in more detail the characteristics of ASD participants, uncovering a quantitative biomarker that correlates directly with the qualitative diagnostic tools used by providers.”
“In our present paper, we have extended the participant pool to include those with ASD, ADHD, comorbid ADHD+ASD, and neurotypical controls,” José said. “Importantly, we developed deep learning techniques that allow for the diagnosis of new participants with high accuracy and in just a few minutes. By looking at their movements at such fine time scales, we can also assess the degree of severity quantitatively.”
For their new study, the researchers recruited 92 participants, including individuals with autism, attention-deficit traits, both conditions, and a comparison group with no diagnosis. The participants ranged in age from children to young adults. Seventeen additional individuals were excluded from the final analysis due to problems completing the task, motor impairments unrelated to the conditions under study, or technical issues with the sensors.
All participants were able to perform a basic reaching task that required them to touch a target on a screen, withdraw their hand, and repeat the motion about 100 times. The motion of the dominant hand was tracked using a high-definition sensor placed in a glove, which recorded data at millisecond resolution.
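From position samples recorded at this resolution, kinematic quantities such as speed and acceleration can be derived by finite differences. The sketch below is illustrative only (the paper does not publish its preprocessing code); it assumes 3-D positions sampled every millisecond and uses `numpy.gradient` as a standard numerical-differentiation approach.

```python
import numpy as np

def kinematics_from_positions(pos, dt=0.001):
    """Derive speed and acceleration from sampled hand positions.

    pos: array of shape (T, 3) -- x, y, z position at each sample
    dt:  sampling interval in seconds (1 ms here)
    """
    vel = np.gradient(pos, dt, axis=0)      # velocity vector per sample
    speed = np.linalg.norm(vel, axis=1)     # scalar speed
    acc = np.gradient(speed, dt)            # rate of change of speed
    return speed, acc

# Example: a hand moving at a constant 0.5 m/s along the x axis
t = np.arange(0, 1, 0.001)                  # one second of data at 1 kHz
pos = np.stack([0.5 * t, np.zeros_like(t), np.zeros_like(t)], axis=1)
speed, acc = kinematics_from_positions(pos)  # speed is 0.5 throughout, acc is 0
```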
The study used two complementary approaches to analyze the data. First, the researchers applied a deep learning technique to the raw motion data. The artificial neural network, trained on thousands of movement trials, learned to classify each participant as either autistic, having attention-deficit traits, both, or neither. The network architecture relied on a type of model well-suited for time-based sequences, called Long Short-Term Memory cells, which can capture both short and long-range patterns in data.
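To make the LSTM idea concrete, here is a minimal numpy sketch of a single LSTM cell stepping through a reaching trial and feeding its final state to a four-way softmax. This is not the authors' architecture; the hidden size, the three input features (angle, speed, acceleration), and the random weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell update: gates decide what to keep, forget, and emit."""
    H = h.shape[0]
    z = W @ x + U @ h + b              # pre-activations for all four gates
    i = sigmoid(z[0*H:1*H])            # input gate
    f = sigmoid(z[1*H:2*H])            # forget gate
    g = np.tanh(z[2*H:3*H])            # candidate cell state
    o = sigmoid(z[3*H:4*H])            # output gate
    c = f * c + i * g                  # cell state carries long-range memory
    h = o * np.tanh(c)                 # hidden state carries short-range output
    return h, c

# Toy trial: 100 time steps of 3 kinematic features (angle, speed, acceleration)
D, H, T = 3, 16, 100
W = rng.normal(0, 0.1, (4*H, D))
U = rng.normal(0, 0.1, (4*H, H))
b = np.zeros(4*H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(T, D)):
    h, c = lstm_step(x, h, c, W, U, b)

# Final hidden state -> softmax over the four diagnostic classes
Wo = rng.normal(0, 0.1, (4, H))
logits = Wo @ h
probs = np.exp(logits) / np.exp(logits).sum()
```

The gating structure is what lets the model weigh both a movement's immediate fluctuations and patterns that unfold over the whole reach, which is why it suits millisecond-scale kinematic sequences.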
The researchers carefully trained and validated the model using cross-validation and tested its performance on data it had not seen before. When multiple types of movement data were combined—such as the hand’s angle, speed, and acceleration—the model achieved diagnostic accuracy around 70 percent.
José was surprised by “the fact that deep learning could detect inherent cognitive information properties about humans with high accuracy.”
In addition to classification accuracy, the researchers looked at another standard machine learning measure known as the Area Under the Receiver Operating Characteristic Curve, or AUC. This metric assesses how well a model can distinguish between categories. The model performed particularly well in identifying neurotypical individuals, with AUC scores as high as 0.95. Distinguishing between autism and attention-deficit traits proved more challenging, especially for individuals with both conditions, which is a known difficulty even in clinical settings.
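The AUC has a simple interpretation: it is the probability that a randomly chosen member of the target group receives a higher model score than a randomly chosen non-member. The pairwise computation below is a self-contained illustration of the metric itself, not the study's code.

```python
import numpy as np

def auc(scores, labels):
    """AUC as the probability that a positive case outranks a negative one.

    Ties count as half, matching the Mann-Whitney U formulation.
    """
    scores = np.asarray(scores, float)
    labels = np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# A model that separates the groups completely scores 1.0 ...
print(auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))   # 1.0
# ... while one that mixes them up sits near chance level, 0.5
print(auc([0.6, 0.5, 0.3, 0.2], [1, 0, 0, 1]))   # 0.5
```

An AUC of 0.95, as reported for the neurotypical group, therefore means the model ranked a neurotypical participant above a neurodivergent one in 95 percent of such pairings.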
To better understand the underlying movement differences, the team also conducted a second analysis focused on the statistical properties of the motion data. After filtering out electronic noise from the sensor signals, they measured how much randomness or variability was present in each person’s movement.
Two key statistical tools were used: the Fano Factor, which assesses variability relative to the mean, and Shannon Entropy, which measures the unpredictability in a signal. These measures provided a numerical fingerprint of each participant’s movement style, and higher levels of randomness tended to align with greater symptom severity as rated by clinicians.
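Both measures are straightforward to compute from a recorded signal. The sketch below shows textbook definitions on synthetic data; the bin count and the Poisson noise model are illustrative assumptions, not details from the paper.

```python
import numpy as np

def fano_factor(counts):
    """Variance-to-mean ratio: ~1 for Poisson-like data, 0 for perfectly regular data."""
    counts = np.asarray(counts, float)
    return counts.var() / counts.mean()

def shannon_entropy(signal, bins=16):
    """Entropy (in bits) of the signal's histogram: higher means less predictable."""
    hist, _ = np.histogram(signal, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

# A perfectly regular signal shows zero variability on both measures ...
steady = np.full(1000, 5.0)

# ... while a noisier one scores higher on both
rng = np.random.default_rng(1)
noisy = rng.poisson(5, 1000).astype(float)
```

In the study's framing, higher values of these "movement fingerprint" statistics tracked greater clinician-rated symptom severity.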
The researchers found that individuals diagnosed with both autism and attention-deficit traits often showed intermediate levels of movement variability, overlapping with both neurotypical participants and those with only one diagnosis. Those with more severe autism symptoms had the most distinctive motion patterns, which matched well with previous research linking motor differences to the condition.
Interestingly, the biometrics were stable across the session, meaning that just 30 to 60 trials were often enough to get reliable measurements. This suggests that the method could be feasible for real-world screening, without requiring extensive testing.
The new study provides evidence "that the inherently random nature of human movements, when looked at on time scales imperceptible to human eyes, contains important information about their cognitive abilities," José told PsyPost.
While the study’s results are promising, the researchers acknowledge some limitations. The sample size, though larger than many previous studies of its kind, was still relatively small. In particular, the groups with attention-deficit traits alone were not as well represented, which could influence how well the model generalizes to broader populations.
“These are preliminary results that need to be further validated in much larger groups of neurodivergent participants,” José noted.
The study also did not control for whether participants were taking medications, such as stimulants or other treatments, which could affect movement. Additionally, deep learning models can be difficult to interpret, and while the researchers took steps to visualize the importance of different input features, the decision-making process of the algorithm remains something of a black box.
Despite these limitations, the study offers a promising proof-of-concept for a new direction in mental health diagnostics. By using affordable, wearable sensors and machine learning, clinicians may one day be able to get early indications of neurodevelopmental differences without waiting for a full psychiatric evaluation.
Such tools could be especially useful in rural areas or regions with limited access to specialists. The researchers are hopeful that with larger datasets, their system can be refined and extended to track changes over time, such as how motion patterns respond to treatment or development.
Looking ahead, the team plans to expand the approach to study how these motion-based metrics evolve with age or medication use. The ultimate goal is not to replace traditional diagnosis, but to provide clinicians with an additional tool to support early detection and personalized care.
“We hope that our protocols can become another tool that can be used by providers to make early assessments of the condition of neurodivergent participants, which is in great need today,” José said.
The study, “Deep learning diagnosis plus kinematic severity assessments of neurodivergent disorders,” was authored by Khoshrav P. Doctor, Chaundy McKeever, Di Wu, Aditya Phadnis, Martin H. Plawecki, John I. Nurnberger Jr., and Jorge V. José.