
AI revolution or ethical dilemma? What 4.2 million tweets reveal about public perception of ChatGPT

by Vladimir Hedrih
October 5, 2024
in Artificial Intelligence

An analysis of millions of English-language tweets discussing ChatGPT in the first three months after its launch revealed that while the general public expressed excitement over this powerful new tool, there was also concern about its potential for misuse. Negative opinions raised questions about its credibility, possible biases, ethical issues, and concerns related to the employment rights of data annotators and programmers. On the other hand, positive views highlighted excitement about its potential use in various fields. The paper was published in PLOS ONE.

ChatGPT is an advanced AI language model developed by OpenAI, designed to understand and generate human-like text based on user input. It was introduced to the public in November 2022, built on the GPT-3.5 architecture, and later enhanced with versions like GPT-4. ChatGPT can perform various tasks, such as answering questions, providing explanations, generating text, offering advice, and assisting with problem-solving. It uses deep learning techniques to predict the most relevant responses, enabling it to engage in interactive conversations on a wide range of topics. The model was trained on large datasets, including books, articles, and online content, allowing it to generate coherent and contextually appropriate responses.

While useful in many areas, ChatGPT has limitations, such as occasionally providing inaccurate or biased information or generating completely fabricated responses (known as AI hallucinations). It has been applied in diverse fields, such as education, customer service, and content creation.

When ChatGPT was first introduced, its popularity soared, with its user base reaching an estimated 100 million users within about two months of launch. Since then, many new AI language models have been developed by various companies. However, it could be argued that ChatGPT sparked the AI revolution in the workplace, generating widespread discussions about AI and prompting people to form varied opinions about its impact.

Researchers Reuben Ng and Ting Yu Joanne Chow aimed to analyze the enthusiasm and emotions surrounding the initial public perceptions of ChatGPT. They examined a dataset containing 4.2 million tweets that mentioned ChatGPT as a keyword, published between December 1, 2022, and March 1, 2023—essentially, the first three months after ChatGPT’s launch. The researchers sought to identify the issues and themes most frequently discussed and the most commonly used keywords and sentiments in tweets about ChatGPT.

The study analyzed the dataset in two ways. First, the researchers identified significant spikes in Twitter activity, that is, periods when the number of tweets, replies, and retweets about ChatGPT was notably high, and examined what users were saying during those times by collecting the top 100 most-engaged tweets from each of these periods. Second, they identified the top keywords each week that expressed positive, neutral, or negative sentiments about ChatGPT.
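
To make the first of these two steps concrete, the sketch below shows, in Python with pandas, one way such a spike-and-engagement analysis could be implemented. It is a minimal illustration under assumed column names and an assumed engagement measure (retweets plus replies); the paper does not describe the authors' pipeline at this level of detail, and the weekly sentiment-keyword step would additionally require a sentiment lexicon.

```python
# Hypothetical sketch only: the study does not publish code, and the column
# names, engagement measure, and spike threshold below are illustrative assumptions.
import pandas as pd


def find_activity_spikes(tweets: pd.DataFrame, threshold_sd: float = 2.0) -> pd.DatetimeIndex:
    """Flag days whose tweet volume exceeds the mean daily count by `threshold_sd` standard deviations."""
    daily_counts = tweets.set_index("created_at").resample("D").size()
    cutoff = daily_counts.mean() + threshold_sd * daily_counts.std()
    return daily_counts[daily_counts > cutoff].index


def top_engaged_tweets(tweets: pd.DataFrame, spike_days: pd.DatetimeIndex, n: int = 100) -> pd.DataFrame:
    """Return the n most-engaged tweets posted on spike days (engagement = retweets + replies here)."""
    on_spike_days = tweets[tweets["created_at"].dt.normalize().isin(spike_days)]
    engagement = on_spike_days["retweets"] + on_spike_days["replies"]
    return on_spike_days.assign(engagement=engagement).nlargest(n, "engagement")


# Toy data standing in for the 4.2 million-tweet dataset.
tweets = pd.DataFrame({
    "created_at": pd.to_datetime(["2022-12-05 10:00", "2022-12-05 11:00", "2022-12-06 09:00"]),
    "text": ["huge breakthrough", "good debugging companion", "concerns about bias"],
    "retweets": [120, 15, 40],
    "replies": [30, 5, 12],
})
spike_days = find_activity_spikes(tweets, threshold_sd=0.5)
print(top_engaged_tweets(tweets, spike_days, n=100))
```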

The results showed that there were 23 peaks in Twitter activity during the study period. The first peak occurred when ChatGPT surpassed 5 million users just five days after its launch, reflecting both the initial buzz and hesitancy surrounding the new tool. The second peak was primarily focused on discussions about ChatGPT’s potential uses. Subsequent peaks explored its utility in academic settings, detection of bias, philosophical thought experiments, discussions of its moral permissibility, and its role as a mirror to humanity.

The analysis of keywords revealed that the most frequent negative terms expressed concerns about ChatGPT’s credibility (e.g., hallucinated, crazy loop, cognitive dissonance, limited knowledge, simple mistakes, overconfidence, misleading), implicit bias in generated responses (e.g., bias, misleading, political bias, wing bias, religious bias), environmental ethics (e.g., fossil fuels), the employment rights of data annotators (e.g., outsourced workers, investigation), and adjacent debates about whether using a neural network trained on existing human works is ethical (e.g., stolen artwork, minimal effort).

Positive and neutral keywords expressed excitement about the general possibilities (e.g., huge breakthrough, biggest tech innovation), particularly in coding (e.g., good debugging companion, insanely useful, code), as a creative tool (e.g., content creation superpower, copywriters), in education (e.g., lesson plans, essays, undergraduate paper, academic purposes, grammar checker), and for personal use (e.g., workout plan, meal plan, calorie targets, personalized meeting templates).

“Overall, sentiments and themes were double-edged, expressing excitement over this powerful new tool and wariness toward its potential for misuse,” the study authors concluded.

The study provides an interesting historical analysis of public discourse about ChatGPT. However, it is worth noting that the study focused solely on English-language tweets, while much of the broader discussion occurred outside of Twitter and in non-English languages.

The paper, “Powerful tool or too powerful? Early public discourse about ChatGPT across 4 million tweets,” was authored by Reuben Ng and Ting Yu Joanne Chow.
