PsyPost

Conversational AI can increase false memory formation by injecting slight misinformation in conversations

by Vladimir Hedrih
January 7, 2026
in Artificial Intelligence, Memory
[Adobe Stock]


An experimental study in the United States found that having a conversational AI insert slight misinformation into conversations with users increased false memory occurrence and reduced memories of correct information. The research was published in IUI ’25: Proceedings of the 30th International Conference on Intelligent User Interfaces.

Human memory works through three main processes: encoding, storage, and retrieval, by which information is transformed, maintained, and later accessed in the brain. Encoding depends on attention and meaning, so information that is emotionally salient or well organized is remembered better. Stored memories are not exact recordings of events; they can change over time. Retrieval is a reconstructive process, meaning that memories are rebuilt each time they are recalled rather than simply replayed.

Because retrieval is reconstructive, false memories can form during these processes. False memories are recollections of events or details that feel real but are inaccurate or entirely fabricated. They arise through suggestion, imagination, repeated questioning, social influence, or confusion between similar experiences. During retrieval, the brain tends to fill in gaps with expectations, prior knowledge, or external information, which then becomes integrated into the memory. Over time, these altered details can feel just as vivid and real as true memories.

Study author Pat Pataranutaporn and his colleagues examined whether malicious generative chatbots can induce false memories by injecting subtle misinformation during interactions with users. Previous studies have documented a rise in AI-driven disinformation campaigns, in which AI systems using an authoritative tone, persuasive language, and targeted personalization make it harder for users to distinguish true from false information. Earlier work also showed that AI-generated content can influence people’s beliefs and attitudes.

Study participants were 180 individuals recruited via CloudResearch. Their average age was 35 years, and the sample was evenly split between female and male participants.

The study authors randomly assigned each participant to read one of three news articles: one about elections in Thailand, one about drug development, and one about shoplifting in the U.K. A short filler task followed. Participants were then randomly allocated to one of five conditions, with 36 participants per condition (12 per article).

The experimental conditions were a control condition (no intervention) and four intervention conditions that involved interacting with an AI (gpt-4o-2024-08-06, a large language model). Of these four conditions, two involved reading an AI-generated summary of the article, while the other two involved a discussion with the AI. Within each pair, the AI was honest in one condition (correctly presenting facts from the article) and misleading in the other (mixing misinformation with factual points).
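The balanced design described above (three articles fully crossed with five conditions, 12 participants per cell, 180 in total) can be sketched as follows. This is an illustrative reconstruction using the numbers reported in the article, not the authors' actual assignment code; the names and the `assign` function are assumptions for demonstration.

```python
import random

# Illustrative labels; the article names three topics and five conditions.
ARTICLES = ["Thailand elections", "drug development", "UK shoplifting"]
CONDITIONS = ["control", "honest summary", "misleading summary",
              "honest chat", "misleading chat"]

def assign(participants):
    """Randomly assign participants to a fully crossed, balanced design.

    With 180 participants and 15 cells (3 articles x 5 conditions),
    each cell receives exactly 12 participants.
    """
    cells = [(a, c) for a in ARTICLES for c in CONDITIONS]
    slots = cells * (len(participants) // len(cells))  # 12 slots per cell
    random.shuffle(slots)
    return dict(zip(participants, slots))

assignment = assign(list(range(180)))
```

Shuffling a fixed pool of slots (rather than drawing each assignment independently at random) is what guarantees the exact cell counts the study reports.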

After undergoing their assigned intervention, participants answered 15 questions about whether specific points had appeared in the original article: 10 concerned key points from the article, while 5 concerned misinformation.


For each question, participants could answer with Yes, No, or Unsure, and they also rated their confidence in the answer. Participants also self-reported their familiarity with AI, evaluated their own general memory performance, and rated their ability to remember visual and verbal information. Finally, participants rated the level of distrust they felt towards official information.

Results showed that participants who engaged in discussion with a misleading chatbot recalled the most false memories of all conditions and the fewest non-false memories. Participants who read a misleading summary recalled slightly more false memories than those in the control and honest conditions, but the difference was not statistically significant. The pattern for non-false memories was similar.

Similarly, participants who conversed with a misleading chatbot were less confident in their non-false memories than participants in the honest conditions, and overall showed the lowest confidence in their recalled memories of all the treatment conditions.

“The findings revealed that LLM-driven [large language model-driven] interventions heighten false memory creation, with misleading chatbots generating the most pronounced misinformation effect. This points to a worrying capacity for language models to introduce false beliefs in their users. Moreover, these interventions not only fostered false memories but also diminished participants’ confidence in recalling accurate information,” the study authors concluded.

The study contributes to the scientific understanding of false memories. However, it should be noted that the study focused on immediate recall of memories with no particular personal relevance for study participants.

In addition, participants’ information came from a single source: the article they read. This differs profoundly from real-world information acquisition, in which people typically gather information from multiple competing sources, weigh those sources by how much they trust them and by various other factors, and can directly verify points relevant to them.

The paper, “Slip Through the Chat: Subtle Injection of False Information in LLM Chatbot Conversations Increases False Memory Formation,” was authored by Pat Pataranutaporn, Chayapatr Archiwaranguprok, Samantha Chan, Elizabeth F. Loftus, and Pattie Maes.

PsyPost is a psychology and neuroscience news website dedicated to reporting the latest research on human behavior, cognition, and society. (READ MORE...)

(c) PsyPost Media Inc
