Artificial intelligence can seem more human than actual humans on social media, study finds

by Eric W. Dolan
July 22, 2023
in Artificial Intelligence, Social Psychology
(Photo credit: Adobe Stock)
A new study suggests that OpenAI’s GPT-3 can both inform and disinform more effectively than real people on social media. The research, published in Science Advances, also highlights the challenges of identifying synthetic (AI-generated) information, as GPT-3 can mimic human writing so well that people have difficulty telling the difference.

The study was motivated by the increasing attention and interest in AI text generators, particularly after the release of OpenAI’s GPT-3 in 2020. GPT-3 is a cutting-edge AI language model that can produce highly credible and realistic texts based on user prompts. It can be used for various beneficial applications, such as translation, dialogue systems, question answering, and creative writing.

However, there are also concerns about its potential misuse, particularly in generating disinformation, fake news, and misleading content, which could harm society, especially amid the "infodemic" of fake news and disinformation that has accompanied the COVID-19 pandemic.

“Our research group is dedicated to understanding the impact of scientific disinformation and ensuring the safe engagement of individuals with information,” explained study author Federico Germani, a researcher at the Institute of Biomedical Ethics and History of Medicine and director of Culturico.

“We aim to mitigate the risks associated with false information on individual and public health. The emergence of AI models like GPT-3 sparked our interest in exploring how AI influences the information landscape and how people perceive and interact with information and misinformation.”

To conduct the study, the researchers focused on 11 topics prone to disinformation, including climate change, vaccine safety, COVID-19, and 5G technology. They generated synthetic tweets using GPT-3 for each of these topics, creating both true and false tweets. Additionally, they collected a random sample of real tweets from Twitter on the same topics, including both true and false ones.
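
The paper's exact prompts are not reproduced here, but the generation step can be pictured with a short sketch. The snippet below is illustrative only: it assumes the legacy openai Python client (pre-1.0) and the text-davinci-003 model, neither of which is confirmed as the study's actual setup, and simply shows how true and false tweets on a topic might be requested.

```python
# Illustrative sketch only: the study's actual prompts, model version, and
# parameters are not public. Assumes the legacy openai client (< 1.0),
# where GPT-3 models were reached via the Completion endpoint.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

TOPICS = ["climate change", "vaccine safety", "COVID-19", "5G technology"]

def generate_tweet(topic: str, truthful: bool) -> str:
    """Ask the model for a single tweet that is either accurate or disinformative."""
    stance = "scientifically accurate" if truthful else "containing disinformation"
    prompt = (
        f"Write a realistic tweet about {topic} that is {stance}. "
        "Keep it under 280 characters."
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3 family model, used here for illustration
        prompt=prompt,
        max_tokens=80,
        temperature=0.9,           # allow variation between generated tweets
    )
    return response.choices[0].text.strip()

for topic in TOPICS:
    print("TRUE: ", generate_tweet(topic, truthful=True))
    print("FALSE:", generate_tweet(topic, truthful=False))
```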

Next, the researchers employed expert assessment to determine whether the synthetic and organic tweets contained disinformation. They selected a subset of tweets for each category (synthetic false, synthetic true, organic false, and organic true) based on the expert evaluation.

They then programmed a survey using the Qualtrics platform to collect data from 697 participants. Most of the respondents were from the United Kingdom, Australia, Canada, the United States, and Ireland. The survey displayed the tweets to respondents, who had to determine whether each tweet contained accurate information or disinformation and whether it was written by a real person or generated by an AI. The survey used a gamified approach to keep respondents engaged.
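
Although the authors' analysis code is not described here, the scoring logic is straightforward: each judgment is compared against the tweet's known category. The sketch below uses hypothetical field names to illustrate how recognition rates for veracity and authorship could be tallied per category.

```python
# Illustrative only: field names (category, judged_false, judged_ai) are
# hypothetical and not taken from the study's dataset.
from collections import defaultdict

# Each record: the tweet's true category and the respondent's two judgments.
responses = [
    {"category": "synthetic_false", "judged_false": True,  "judged_ai": False},
    {"category": "organic_false",   "judged_false": True,  "judged_ai": False},
    {"category": "synthetic_true",  "judged_false": False, "judged_ai": True},
    {"category": "organic_true",    "judged_false": False, "judged_ai": False},
    # ... one entry per tweet rating collected in the survey
]

totals = defaultdict(lambda: {"n": 0, "veracity_correct": 0, "source_correct": 0})

for r in responses:
    cat = r["category"]
    is_false = cat.endswith("_false")           # ground-truth veracity
    is_synthetic = cat.startswith("synthetic")  # ground-truth authorship
    bucket = totals[cat]
    bucket["n"] += 1
    bucket["veracity_correct"] += int(r["judged_false"] == is_false)
    bucket["source_correct"] += int(r["judged_ai"] == is_synthetic)

for cat, b in totals.items():
    print(f"{cat}: veracity accuracy {b['veracity_correct']/b['n']:.0%}, "
          f"source accuracy {b['source_correct']/b['n']:.0%}")
```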

The researchers found that people were better at recognizing disinformation in “organic false” tweets (written by real users) compared to “synthetic false” tweets (generated by GPT-3). In other words, people were more likely to identify false information when it came from real users on Twitter.

“One noteworthy finding was that disinformation generated by AI was more convincing than that produced by humans,” Germani said.

On the other hand, people were more likely to correctly recognize accurate information in "synthetic true" tweets (generated by GPT-3) than in "organic true" tweets (written by real users). In other words, when GPT-3 produced accurate information, people were more likely to recognize it as true than when the same kind of accurate information was written by real users.

The study also revealed that people had a hard time distinguishing between tweets written by real users and those generated by GPT-3. GPT-3 was able to mimic human writing styles and language patterns so effectively that people could not easily tell the difference.

“The most surprising discovery was that participants often perceived information produced by AI as more likely to come from a human, more often than information produced by an actual person. This suggests that AI can convince you of being a real person more than a real person can convince you of being a real person, which is a fascinating side finding of our study,” Germani told PsyPost.

“Our study emphasizes the challenge of differentiating between information generated by AI and that created by humans. It highlights the importance of critically evaluating the information we receive and placing trust in reliable sources. Additionally, I would encourage individuals to familiarize themselves with these emerging technologies to grasp their potential, both positive and negative.”

The researchers also observed that GPT-3 sometimes refused to generate disinformation while, in other cases, it produced disinformation even when instructed to generate accurate information.

“It’s important to note that our study was conducted in a controlled experimental environment. While it raises concerns about the effectiveness of AI in generating persuasive disinformation, we have yet to fully understand the real-world implications,” Germani said.

“Addressing this requires conducting larger-scale studies on social media platforms to observe how people interact with AI-generated information and how these interactions influence behavior and adherence to recommendations for individual and public health.”

The study, "AI model GPT-3 (dis)informs us better than humans", was authored by Giovanni Spitale, Nikola Biller-Andorno, and Federico Germani.
