
AI revolution or ethical dilemma? What 4.2 million tweets reveal about public perception of ChatGPT

by Vladimir Hedrih
October 5, 2024
[Adobe Stock]


An analysis of millions of English-language tweets discussing ChatGPT in the first three months after its launch revealed that while the general public expressed excitement over this powerful new tool, there was also concern about its potential for misuse. Negative opinions raised questions about its credibility, possible biases, ethical issues, and concerns related to the employment rights of data annotators and programmers. On the other hand, positive views highlighted excitement about its potential use in various fields. The paper was published in PLOS ONE.

ChatGPT is an advanced AI language model developed by OpenAI, designed to understand and generate human-like text based on user input. It was introduced to the public in November 2022 as part of the GPT-3.5 architecture and later enhanced with versions like GPT-4. ChatGPT can perform various tasks, such as answering questions, providing explanations, generating text, offering advice, and assisting with problem-solving. It uses deep learning techniques to predict the most relevant responses, enabling it to engage in interactive conversations on a wide range of topics. The model was trained on large datasets, including books, articles, and online content, which allow it to generate coherent and contextually appropriate responses.

While useful in many areas, ChatGPT has limitations, such as occasionally providing inaccurate or biased information or generating completely fabricated responses (known as AI hallucinations). It has been applied in diverse fields, such as education, customer service, and content creation.

When ChatGPT was first introduced, its popularity soared, with its user base reaching 100 million individuals in the first month. Since then, many new AI language models have been developed by various companies. However, it could be argued that ChatGPT sparked the AI revolution in the workplace, generating widespread discussions about AI and prompting people to form varied opinions about its impact.

Researchers Reuben Ng and Ting Yu Joanne Chow aimed to analyze the enthusiasm and emotions surrounding the initial public perceptions of ChatGPT. They examined a dataset containing 4.2 million tweets that mentioned ChatGPT as a keyword, published between December 1, 2022, and March 1, 2023—essentially, the first three months after ChatGPT’s launch. The researchers sought to identify the issues and themes most frequently discussed and the most commonly used keywords and sentiments in tweets about ChatGPT.

The study analyzed the dataset in two ways. First, the researchers focused on identifying significant spikes in Twitter activity, or periods when the number of tweets, replies, and retweets about ChatGPT was notably high, and they analyzed what users were saying during those times. They collected and analyzed the top 100 most-engaged tweets from these periods. Second, they identified the top keywords each week that expressed positive, neutral, or negative sentiments about ChatGPT.
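The two-step approach described here, flagging unusual spikes in tweet volume and then tallying sentiment-bearing keywords, can be sketched in Python. This is an illustrative reconstruction with synthetic counts and a toy lexicon, not the authors' actual code or data:

```python
from statistics import mean, stdev
from collections import Counter

# Synthetic daily tweet counts (the study analyzed ~4.2 million real tweets)
daily_counts = [120, 130, 125, 900, 140, 135, 128, 131, 850, 133,
                127, 129, 126, 132, 134]

def find_peaks(counts, z=2.0):
    """Flag days whose volume exceeds mean + z standard deviations."""
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts) if c > mu + z * sigma]

# Toy lexicon standing in for the paper's derived keyword lists
POSITIVE = {"breakthrough", "useful", "superpower"}
NEGATIVE = {"bias", "misleading", "hallucinated"}

def keyword_sentiment(tweets):
    """Count positive and negative keyword hits across a batch of tweets."""
    pos, neg = Counter(), Counter()
    for text in tweets:
        for word in text.lower().split():
            if word in POSITIVE:
                pos[word] += 1
            elif word in NEGATIVE:
                neg[word] += 1
    return pos, neg

peaks = find_peaks(daily_counts)          # days 3 and 8 stand out
pos, neg = keyword_sentiment([
    "chatgpt is a huge breakthrough and insanely useful",
    "worried about bias and misleading answers",
])
```

In practice the researchers ranked tweets by engagement within each peak and derived the keyword lists from the data itself rather than a fixed lexicon, but the structure of the analysis is the same.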

The results showed that there were 23 peaks in Twitter activity during the study period. The first peak occurred when ChatGPT surpassed 5 million users just five days after its launch, reflecting both the initial buzz and hesitancy surrounding the new tool. The second peak was primarily focused on discussions about ChatGPT’s potential uses. Subsequent peaks explored its utility in academic settings, detection of bias, philosophical thought experiments, discussions of its moral permissibility, and its role as a mirror to humanity.

The analysis of keywords revealed that the most frequent negative terms expressed concerns about ChatGPT’s credibility (e.g., hallucinated, crazy loop, cognitive dissonance, limited knowledge, simple mistakes, overconfidence, misleading), implicit bias in generated responses (e.g., bias, misleading, political bias, wing bias, religious bias), environmental ethics (e.g., fossil fuels), the employment rights of data annotators (e.g., outsourced workers, investigation), and adjacent debates about whether using a neural network trained on existing human works is ethical (e.g., stolen artwork, minimal effort).


Positive and neutral keywords expressed excitement about the general possibilities (e.g., huge breakthrough, biggest tech innovation), particularly in coding (e.g., good debugging companion, insanely useful, code), as a creative tool (e.g., content creation superpower, copywriters), in education (e.g., lesson plans, essays, undergraduate paper, academic purposes, grammar checker), and for personal use (e.g., workout plan, meal plan, calorie targets, personalized meeting templates).

“Overall, sentiments and themes were double-edged, expressing excitement over this powerful new tool and wariness toward its potential for misuse,” the study authors concluded.

The study provides an interesting historical analysis of public discourse about ChatGPT. However, it is worth noting that the study focused solely on English-language tweets, while much of the broader discussion occurred outside of Twitter and in non-English languages.

The paper, “Powerful tool or too powerful? Early public discourse about ChatGPT across 4 million tweets,” was authored by Reuben Ng and Ting Yu Joanne Chow.
