PsyPost

Conversational AI can increase false memory formation by injecting slight misinformation in conversations

by Vladimir Hedrih
January 7, 2026
in Artificial Intelligence, Memory
[Adobe Stock]

An experimental study in the United States found that having a conversational AI insert slight misinformation into conversations with users increased false memory occurrence and reduced memories of correct information. The research was published in IUI ’25: Proceedings of the 30th International Conference on Intelligent User Interfaces.

Human memory works through three main processes: encoding, during which information is transformed into a form the brain can store; storage, during which it is maintained; and retrieval, during which it is later accessed. Encoding depends on attention and meaning, so information that is emotionally salient or well-organized is remembered better. Stored memories are not exact recordings of events; they can change over time. Retrieval is a reconstructive process, meaning that memories are rebuilt each time they are recalled rather than simply replayed.

Because retrieval is reconstructive, false memories can form during these processes. False memories are recollections of events or details that feel real but are inaccurate or entirely fabricated. They form through suggestion, imagination, repeated questioning, social influence, or confusion between similar experiences. During retrieval, the brain tends to fill in gaps using expectations, prior knowledge, or external information, which then becomes integrated into the memory. Over time, these altered details can feel just as vivid and real as true memories.

Study author Pat Pataranutaporn and his colleagues examined the potential for malicious generative chatbots to induce false memories by injecting subtle misinformation during interactions with users. Previous studies indicated a rise in AI-driven disinformation campaigns, in which AIs employing an authoritative tone, persuasive language, and targeted personalization made it harder for users to distinguish true from false information. Earlier studies also showed that AI-generated content can influence people's beliefs and attitudes.

Study participants were 180 individuals recruited via CloudResearch. Their average age was 35 years, and the sample included equal numbers of women and men.

The study authors randomly assigned each participant to read one of three informational articles: one about elections in Thailand, one about drug development, and one about shoplifting in the U.K. After a short filler task, participants were again randomly allocated to one of five conditions, with 36 participants per condition (12 per article).

The experimental conditions were the control condition (no intervention) and four intervention conditions that included interaction with an AI (gpt-4o-2024-08-06, a large language model). Of these four conditions, two included reading an AI-generated summary of the article, while the other two included engaging in a discussion with the AI. In each pair of conditions, the AI was honest in one (i.e., correctly presenting facts from the articles) and misleading in the other (i.e., incorporating misinformation alongside factual points).

After undergoing their assigned intervention, participants answered questions about whether specific points appeared in the original article. The questionnaire consisted of 15 questions: 10 about key points from the article and 5 about misinformation.

For each question, participants could answer with Yes, No, or Unsure, and they also rated their confidence in the answer. Participants also self-reported their familiarity with AI, evaluated their own general memory performance, and rated their ability to remember visual and verbal information. Finally, participants rated the level of distrust they felt towards official information.

Results showed that participants who engaged in discussion with a misleading chatbot recalled the most false memories of any condition and the fewest true (non-false) memories. Participants who read a misleading summary recalled slightly more false memories than those in the control and honest conditions, but the difference was not statistically reliable. The pattern was similar for non-false memories.

Similarly, participants who conversed with a misleading chatbot displayed lower confidence in their non-false memories compared to participants in honest conditions. Overall, participants who conversed with a misleading chatbot had the lowest confidence in their recalled memories compared to all the other treatment conditions.

“The findings revealed that LLM-driven [large language model-driven] interventions heighten false memory creation, with misleading chatbots generating the most pronounced misinformation effect. This points to a worrying capacity for language models to introduce false beliefs in their users. Moreover, these interventions not only fostered false memories but also diminished participants’ confidence in recalling accurate information,” the study authors concluded.

The study contributes to the scientific understanding of false memories. However, it should be noted that the study focused on immediate recall of memories with no particular personal relevance for study participants.

Moreover, participants' information came from a single source: the article they read. This differs profoundly from real-world information acquisition, where individuals typically gather information from multiple competing sources, weigh those sources according to how much they trust them and various other factors, and can verify points directly relevant to themselves.

The paper, “Slip Through the Chat: Subtle Injection of False Information in LLM Chatbot Conversations Increases False Memory Formation,” was authored by Pat Pataranutaporn, Chayapatr Archiwaranguprok, Samantha Chan, Elizabeth F. Loftus, and Pattie Maes.
