
AI revolution or ethical dilemma? What 4.2 million tweets reveal about public perception of ChatGPT

by Vladimir Hedrih
October 5, 2024
in Artificial Intelligence
[Adobe Stock]

An analysis of millions of English-language tweets discussing ChatGPT in the first three months after its launch revealed that while the general public expressed excitement over this powerful new tool, there was also concern about its potential for misuse. Negative opinions raised questions about its credibility, possible biases, ethical issues, and the employment rights of data annotators and programmers, while positive views highlighted excitement about its potential use in various fields. The paper was published in PLOS ONE.

ChatGPT is an advanced AI language model developed by OpenAI, designed to understand and generate human-like text based on user input. It was introduced to the public in November 2022, built on the GPT-3.5 architecture, and later enhanced with versions such as GPT-4. ChatGPT can perform various tasks, such as answering questions, providing explanations, generating text, offering advice, and assisting with problem-solving. It uses deep learning techniques to predict the most relevant responses, enabling it to engage in interactive conversations on a wide range of topics. The model was trained on large datasets, including books, articles, and online content, allowing it to generate coherent and contextually appropriate responses.

While useful in many areas, ChatGPT has limitations, such as occasionally providing inaccurate or biased information or generating completely fabricated responses (known as AI hallucinations). It has been applied in diverse fields, such as education, customer service, and content creation.

When ChatGPT was first introduced, its popularity soared, with its user base reaching 100 million individuals in the first month. Since then, many new AI language models have been developed by various companies. However, it could be argued that ChatGPT sparked the AI revolution in the workplace, generating widespread discussions about AI and prompting people to form varied opinions about its impact.

Researchers Reuben Ng and Ting Yu Joanne Chow aimed to analyze the enthusiasm and emotions surrounding the initial public perceptions of ChatGPT. They examined a dataset containing 4.2 million tweets that mentioned ChatGPT as a keyword, published between December 1, 2022, and March 1, 2023—essentially, the first three months after ChatGPT’s launch. The researchers sought to identify the issues and themes most frequently discussed and the most commonly used keywords and sentiments in tweets about ChatGPT.

The study analyzed the dataset in two ways. First, the researchers focused on identifying significant spikes in Twitter activity, or periods when the number of tweets, replies, and retweets about ChatGPT was notably high, and they analyzed what users were saying during those times. They collected and analyzed the top 100 most-engaged tweets from these periods. Second, they identified the top keywords each week that expressed positive, neutral, or negative sentiments about ChatGPT.
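
For readers curious about the mechanics, the sketch below illustrates the two steps described above: flagging days with unusually high tweet volume and tallying the most frequent keywords per week under a crude sentiment label. It is a minimal illustration, not the authors' actual pipeline; the column names, spike threshold, and tiny sentiment lexicon are assumptions made purely for the example.

```python
# Illustrative sketch only (not the study's pipeline): detect spikes in daily
# tweet volume and count top keywords per week by a crude sentiment label.
# Assumes a pandas DataFrame with a datetime 'created_at' column and a 'text' column.
import re
from collections import Counter

import pandas as pd

# Toy sentiment lexicon for illustration; real analyses use far richer resources.
POSITIVE = {"breakthrough", "useful", "superpower", "amazing"}
NEGATIVE = {"hallucinated", "bias", "misleading", "stolen"}


def find_spike_days(tweets: pd.DataFrame, z: float = 2.0) -> pd.Series:
    """Return daily tweet counts exceeding mean + z * std (candidate 'peaks')."""
    daily = tweets.set_index("created_at").resample("D").size()
    threshold = daily.mean() + z * daily.std()
    return daily[daily > threshold]


def weekly_keyword_sentiment(tweets: pd.DataFrame, top_n: int = 10) -> pd.DataFrame:
    """Tally top keywords per week, split by a crude positive/negative/neutral label."""
    rows = []
    for week, group in tweets.set_index("created_at").resample("W"):
        counts = {"positive": Counter(), "negative": Counter(), "neutral": Counter()}
        for text in group["text"]:
            words = set(re.findall(r"[a-z']+", text.lower()))
            if words & NEGATIVE:
                label = "negative"
            elif words & POSITIVE:
                label = "positive"
            else:
                label = "neutral"
            counts[label].update(words)
        for label, counter in counts.items():
            for word, n in counter.most_common(top_n):
                rows.append({"week": week, "sentiment": label, "keyword": word, "count": n})
    return pd.DataFrame(rows)
```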

The results showed that there were 23 peaks in Twitter activity during the study period. The first peak occurred when ChatGPT surpassed 5 million users just five days after its launch, reflecting both the initial buzz and hesitancy surrounding the new tool. The second peak was primarily focused on discussions about ChatGPT’s potential uses. Subsequent peaks centered on its utility in academic settings, the detection of bias, philosophical thought experiments, debates over its moral permissibility, and its role as a mirror to humanity.

The analysis of keywords revealed that the most frequent negative terms expressed concerns about ChatGPT’s credibility (e.g., hallucinated, crazy loop, cognitive dissonance, limited knowledge, simple mistakes, overconfidence, misleading), implicit bias in generated responses (e.g., bias, misleading, political bias, wing bias, religious bias), environmental ethics (e.g., fossil fuels), the employment rights of data annotators (e.g., outsourced workers, investigation), and adjacent debates about whether using a neural network trained on existing human works is ethical (e.g., stolen artwork, minimal effort).

Positive and neutral keywords expressed excitement about the general possibilities (e.g., huge breakthrough, biggest tech innovation), particularly in coding (e.g., good debugging companion, insanely useful, code), as a creative tool (e.g., content creation superpower, copywriters), in education (e.g., lesson plans, essays, undergraduate paper, academic purposes, grammar checker), and for personal use (e.g., workout plan, meal plan, calorie targets, personalized meeting templates).

“Overall, sentiments and themes were double-edged, expressing excitement over this powerful new tool and wariness toward its potential for misuse,” the study authors concluded.

The study provides an interesting historical analysis of public discourse about ChatGPT. However, it is worth noting that the study focused solely on English-language tweets, while much of the broader discussion occurred outside of Twitter and in non-English languages.

The paper, “Powerful tool or too powerful? Early public discourse about ChatGPT across 4 million tweets,” was authored by Reuben Ng and Ting Yu Joanne Chow.
