PsyPost
The scientist who predicted AI psychosis has issued another dire warning

by Eric W. Dolan
February 7, 2026
in Artificial Intelligence, Cognitive Science
[Image credit: Adobe Stock]
More than two years ago, Danish psychiatrist Søren Dinesen Østergaard published a provocative editorial suggesting that the rise of conversational artificial intelligence could have severe mental health consequences. He proposed that the persuasive, human-like nature of chatbots might push vulnerable individuals toward psychosis.

At the time, the idea seemed speculative. In the months that followed, however, clinicians and journalists began documenting real-world cases that mirrored his concerns. Patients were developing fixed, false beliefs after marathon sessions with digital companions. Now, the scientist who foresaw the psychiatric risks of AI has issued a new warning. This time, he is not focusing on mental illness, but on a potential degradation of human intelligence itself.

In a new letter to the editor published in Acta Psychiatrica Scandinavica, Østergaard argues that academia and the sciences are facing a crisis of “cognitive debt.” He posits that the outsourcing of writing and reasoning to generative AI is eroding the fundamental skills required for scientific discovery. The commentary builds upon a growing body of evidence suggesting that while AI can mimic human output, relying on it may physically alter the brain’s ability to think.

Østergaard’s latest writing is a response to a letter by Professor Soichiro Matsubara. Matsubara had previously highlighted that AI chatbots might harm the writing abilities of young doctors and damage the mentorship dynamic in medicine. Østergaard agrees with this assessment but takes the argument a step further. He contends that the danger extends beyond mere writing skills and strikes at the core of the scientific process: reasoning.

The psychiatrist acknowledges the utility of AI for surface-level tasks. He notes that using a tool to proofread a manuscript for grammar is largely harmless. However, he points out that technology companies are actively marketing “reasoning models” designed to solve complex problems and plan workflows. While this sounds efficient, Østergaard suggests it creates a paradox. He questions whether the next generation of scientists will possess the cognitive capacity to make breakthroughs if they never practice the struggle of reasoning themselves.

To illustrate this point, he cites the developers of AlphaFold, an AI program that predicts protein structures. This technology resulted in the 2024 Nobel Prize in Chemistry for researchers from Google DeepMind and the University of Washington.

Østergaard argues that it is not a given that these specific scientists would have achieved such heights if generative AI had been available to do their thinking for them during their formative years. He suggests that scientific reasoning is not an innate talent. It is a skill learned through the rigorous, often tedious practice of reading, thinking, and revising.

The concept of “cognitive debt” is central to this new warning. Østergaard draws attention to a preprint study by Kosmyna and colleagues, titled “Your brain on ChatGPT.” This research attempts to quantify the neurological cost of using AI assistance. The study involved participants writing essays under three conditions: using ChatGPT, using a search engine, or using only their own brains.

The findings of the Kosmyna study provide physical evidence for Østergaard's concerns. Electroencephalography (EEG) monitoring revealed that participants in the ChatGPT group showed substantially lower brain activation in networks typically engaged during cognitive tasks. The brain was simply doing less work. More alarming was the finding that this "weaker neural connectivity" persisted even when these participants switched to writing essays without AI.

The study also found that those who used the chatbot had significant difficulty recalling the content of the essays they had just produced. The authors of the paper concluded that the results point to a pressing concern: a likely decrease in learning skills. Østergaard describes these findings as deeply concerning. He suggests that if AI use indeed causes such cognitive debt, the educational system may be in a difficult position.

This aligns with other recent papers on “cognitive offloading.” A commentary by Umberto León Domínguez published in Neuropsychology explores the idea of AI as a “cognitive prosthesis.” Just as a physical prosthesis replaces a limb, AI replaces mental effort. While this can be efficient, León Domínguez warns that it prevents the stimulation of higher-order executive functions. If students do not engage in the mental gymnastics required to solve problems, those cognitive muscles may atrophy.

Real-world examples are already surfacing. Østergaard references a report from the Danish Broadcasting Corporation about a high school student who used ChatGPT to complete approximately 150 assignments. The student was eventually expelled. While this is an extreme case, Østergaard notes that widespread outsourcing is becoming the norm from primary school through graduate programs. He fears this will reduce the chances of exceptional minds emerging in the future.

The loss of critical thinking skills is not just a future risk but a present reality. A study by Michael Gerlich published in the journal Societies found a strong negative correlation between frequent AI tool usage and critical thinking abilities. The research indicated that younger individuals were particularly susceptible. Those who frequently offloaded cognitive tasks to algorithms performed worse on assessments requiring independent analysis and evaluation.

There is also the issue of false confidence. A study published in Computers in Human Behavior by Daniela Fernandes and colleagues found that while AI helped users score higher on logic tests, it also distorted their self-assessment. Participants consistently overestimated their performance. The technology acted as a buffer, masking their own lack of understanding. This creates a scenario where individuals feel competent because the machine is competent, leading to a disconnect between perceived and actual ability.

This intellectual detachment mirrors the emotional detachment Østergaard identified in his earlier work on AI psychosis. In his previous editorial, he warned that the “sycophantic” nature of chatbots—their tendency to agree with and flatter the user—could reinforce delusions. A user experiencing paranoia might find a willing conspirator in a chatbot, which confirms their false beliefs to keep the conversation going.

The mechanism is similar in the context of cognitive debt. The AI provides an easy, pleasing answer that satisfies the user’s immediate need, whether that need is emotional validation or a completed homework assignment. In both cases, the human user surrenders their agency to the algorithm. They stop testing reality, or their own logic, against the world, preferring the smooth, frictionless output of the machine.

Østergaard connects this loss of human capability to the ultimate risks of artificial intelligence. He cites Geoffrey Hinton, a Nobel laureate in physics often called the “godfather of AI.” Hinton has expressed concerns that there is a significant probability that AI could threaten humanity’s existence within the next few decades. Østergaard argues that facing such existential threats requires humans who are cognitively adept.

If the population becomes “cognitively indebted,” reliant on machines for basic reasoning, the ability to maintain control over those same machines diminishes. The psychiatrist emphasizes that we need humans in the loop who are capable of independent, rigorous thought. A society that has outsourced its reasoning to the very systems it needs to regulate may find itself ill-equipped to handle the consequences.

The warning is clear. The convenience of generative AI comes with a hidden cost. It is not merely a matter of students cheating on essays or doctors losing their writing flair. The evidence suggests a fundamental change in how the brain processes information. By skipping the struggle of learning and reasoning, humans may be sacrificing the very cognitive traits that allow for scientific advancement and independent judgment.

Østergaard was correct when he flagged the potential for AI to distort reality for psychiatric patients. His new commentary suggests that the distortion of our intellectual potential may be a far more widespread and insidious problem. As AI tools become more integrated into daily life, the choice between cognitive effort and cognitive offloading becomes a defining challenge for the future of human intelligence.

The paper, “Generative Artificial Intelligence (AI) and the Outsourcing of Scientific Reasoning: Perils of the Rising Cognitive Debt in Academia and Beyond,” was published January 21, 2026.

PsyPost is a psychology and neuroscience news website dedicated to reporting the latest research on human behavior, cognition, and society.
(c) PsyPost Media Inc