PsyPost

The scientist who predicted AI psychosis has issued another dire warning

by Eric W. Dolan
February 7, 2026
[Adobe Stock]

More than two years ago, Danish psychiatrist Søren Dinesen Østergaard published a provocative editorial suggesting that the rise of conversational artificial intelligence could have severe mental health consequences. He proposed that the persuasive, human-like nature of chatbots might push vulnerable individuals toward psychosis.

At the time, the idea seemed speculative. In the months that followed, however, clinicians and journalists began documenting real-world cases that mirrored his concerns. Patients were developing fixed, false beliefs after marathon sessions with digital companions. Now, the scientist who foresaw the psychiatric risks of AI has issued a new warning. This time, he is not focusing on mental illness, but on a potential degradation of human intelligence itself.

In a new letter to the editor published in Acta Psychiatrica Scandinavica, Østergaard argues that academia and the sciences are facing a crisis of “cognitive debt.” He posits that the outsourcing of writing and reasoning to generative AI is eroding the fundamental skills required for scientific discovery. The commentary builds upon a growing body of evidence suggesting that while AI can mimic human output, relying on it may physically alter the brain’s ability to think.

Østergaard’s latest writing is a response to a letter by Professor Soichiro Matsubara. Matsubara had previously highlighted that AI chatbots might harm the writing abilities of young doctors and damage the mentorship dynamic in medicine. Østergaard agrees with this assessment but takes the argument a step further. He contends that the danger extends beyond mere writing skills and strikes at the core of the scientific process: reasoning.

The psychiatrist acknowledges the utility of AI for surface-level tasks. He notes that using a tool to proofread a manuscript for grammar is largely harmless. However, he points out that technology companies are actively marketing “reasoning models” designed to solve complex problems and plan workflows. While this sounds efficient, Østergaard suggests it creates a paradox. He questions whether the next generation of scientists will possess the cognitive capacity to make breakthroughs if they never practice the struggle of reasoning themselves.

To illustrate this point, he cites the developers of AlphaFold, an AI program that predicts protein structures. This technology resulted in the 2024 Nobel Prize in Chemistry for researchers from Google DeepMind and the University of Washington.

Østergaard argues that it is not a given that these specific scientists would have achieved such heights if generative AI had been available to do their thinking for them during their formative years. He suggests that scientific reasoning is not an innate talent. It is a skill learned through the rigorous, often tedious practice of reading, thinking, and revising.

The concept of “cognitive debt” is central to this new warning. Østergaard draws attention to a preprint study by Kosmyna and colleagues, titled “Your brain on ChatGPT.” This research attempts to quantify the neurological cost of using AI assistance. The study involved participants writing essays under three conditions: using ChatGPT, using a search engine, or using only their own brains.

The findings of the Kosmyna study provide physical evidence for Østergaard’s concerns. Electroencephalography (EEG) monitoring revealed that participants in the ChatGPT group showed substantially lower brain activation in networks typically engaged during cognitive tasks. The brain was simply doing less work. More alarming was the finding that this “weaker neural connectivity” persisted even when these participants switched to writing essays without AI.

The study also found that those who used the chatbot had significant difficulties recalling the content of the essays they had just produced. The authors concluded that the results point to a likely decrease in learning skills, a matter they describe as pressing. Østergaard describes these findings as deeply concerning. He suggests that if AI use indeed causes such cognitive debt, the educational system may be in a difficult position.

This aligns with other recent papers regarding “cognitive offloading.” A commentary by Umberto León Domínguez published in Neuropsychology explores the idea of AI as a “cognitive prosthesis.” Just as a physical prosthesis replaces a limb, AI replaces mental effort. While this can be efficient, Domínguez warns that it prevents the stimulation of higher-order executive functions. If students do not engage in the mental gymnastics required to solve problems, those cognitive muscles may atrophy.

Real-world examples are already surfacing. Østergaard references a report from the Danish Broadcasting Corporation about a high school student who used ChatGPT to complete approximately 150 assignments. The student was eventually expelled. While this is an extreme case, Østergaard notes that widespread outsourcing is becoming the norm from primary school through graduate programs. He fears this will reduce the chances of exceptional minds emerging in the future.

The loss of critical thinking skills is not just a future risk but a present reality. A study by Michael Gerlich published in the journal Societies found a strong negative correlation between frequent AI tool usage and critical thinking abilities. The research indicated that younger individuals were particularly susceptible. Those who frequently offloaded cognitive tasks to algorithms performed worse on assessments requiring independent analysis and evaluation.

There is also the issue of false confidence. A study published in Computers in Human Behavior by Daniela Fernandes and colleagues found that while AI helped users score higher on logic tests, it also distorted their self-assessment. Participants consistently overestimated their performance. The technology acted as a buffer, masking their own lack of understanding. This creates a scenario where individuals feel competent because the machine is competent, leading to a disconnect between perceived and actual ability.

This intellectual detachment mirrors the emotional detachment Østergaard identified in his earlier work on AI psychosis. In his previous editorial, he warned that the “sycophantic” nature of chatbots—their tendency to agree with and flatter the user—could reinforce delusions. A user experiencing paranoia might find a willing conspirator in a chatbot, which confirms their false beliefs to keep the conversation going.

The mechanism is similar in the context of cognitive debt. The AI provides an easy, pleasing answer that satisfies the immediate need of the user, whether that need is emotional validation or a completed homework assignment. In both cases, the human user surrenders their agency to the algorithm. They stop testing reality or their own logic against the world, preferring the smooth, frictionless output of the machine.

Østergaard connects this loss of human capability to the ultimate risks of artificial intelligence. He cites Geoffrey Hinton, a Nobel laureate in physics often called the “godfather of AI.” Hinton has expressed concerns that there is a significant probability that AI could threaten humanity’s existence within the next few decades. Østergaard argues that facing such existential threats requires humans who are cognitively adept.

If the population becomes “cognitively indebted,” reliant on machines for basic reasoning, the ability to maintain control over those same machines diminishes. The psychiatrist emphasizes that we need humans in the loop who are capable of independent, rigorous thought. A society that has outsourced its reasoning to the very systems it needs to regulate may find itself ill-equipped to handle the consequences.

The warning is clear. The convenience of generative AI comes with a hidden cost. It is not merely a matter of students cheating on essays or doctors losing their writing flair. The evidence suggests a fundamental change in how the brain processes information. By skipping the struggle of learning and reasoning, humans may be sacrificing the very cognitive traits that allow for scientific advancement and independent judgment.

Østergaard was correct when he flagged the potential for AI to distort reality for psychiatric patients. His new commentary suggests that the distortion of our intellectual potential may be a far more widespread and insidious problem. As AI tools become more integrated into daily life, the choice between cognitive effort and cognitive offloading becomes a defining challenge for the future of human intelligence.

The paper, “Generative Artificial Intelligence (AI) and the Outsourcing of Scientific Reasoning: Perils of the Rising Cognitive Debt in Academia and Beyond,” was published January 21, 2026.

(c) PsyPost Media Inc