AI revolution or ethical dilemma? What 4.2 million tweets reveal about public perception of ChatGPT

by Vladimir Hedrih
October 5, 2024
in Artificial Intelligence

An analysis of millions of English-language tweets discussing ChatGPT in the first three months after its launch revealed that while the general public expressed excitement over this powerful new tool, there was also concern about its potential for misuse. Negative opinions raised questions about its credibility, possible biases, ethical issues, and the employment rights of data annotators and programmers. Positive views, on the other hand, highlighted excitement about its potential use in various fields. The paper was published in PLOS ONE.

ChatGPT is an advanced AI language model developed by OpenAI, designed to understand and generate human-like text based on user input. It was introduced to the public in November 2022 as part of the GPT-3.5 architecture and later enhanced with versions like GPT-4. ChatGPT can perform various tasks, such as answering questions, providing explanations, generating text, offering advice, and assisting with problem-solving. It uses deep learning techniques to predict the most relevant responses, enabling it to engage in interactive conversations on a wide range of topics. The model was trained on large datasets, including books, articles, and online content, allowing it to generate coherent and contextually appropriate responses.

While useful in many areas, ChatGPT has limitations, such as occasionally providing inaccurate or biased information or generating completely fabricated responses (known as AI hallucinations). It has been applied in diverse fields, such as education, customer service, and content creation.
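
ChatGPT itself is used through a chat interface, but the same models can also be queried programmatically. As a rough illustration only (not part of the study), a request through OpenAI's Python SDK might look like the sketch below; the model name, prompt, and client setup are assumptions for demonstration, not details taken from the paper.

```python
# Illustrative sketch only: querying a ChatGPT-class model via OpenAI's Python SDK.
# The model name and prompt are assumptions for demonstration, not drawn from the study.
from openai import OpenAI

client = OpenAI()  # expects an API key in the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the model family ChatGPT launched with
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain AI hallucinations in one sentence."},
    ],
)
print(response.choices[0].message.content)
```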

When ChatGPT was first introduced, its popularity soared, with its user base reaching 100 million individuals in the first month. Since then, many new AI language models have been developed by various companies. However, it could be argued that ChatGPT sparked the AI revolution in the workplace, generating widespread discussions about AI and prompting people to form varied opinions about its impact.

Researchers Reuben Ng and Ting Yu Joanne Chow aimed to analyze the enthusiasm and emotions surrounding the initial public perceptions of ChatGPT. They examined a dataset containing 4.2 million tweets that mentioned ChatGPT as a keyword, published between December 1, 2022, and March 1, 2023—essentially, the first three months after ChatGPT’s launch. The researchers sought to identify the issues and themes most frequently discussed and the most commonly used keywords and sentiments in tweets about ChatGPT.

The study analyzed the dataset in two ways. First, the researchers focused on identifying significant spikes in Twitter activity, or periods when the number of tweets, replies, and retweets about ChatGPT was notably high, and they analyzed what users were saying during those times. They collected and analyzed the top 100 most-engaged tweets from these periods. Second, they identified the top keywords each week that expressed positive, neutral, or negative sentiments about ChatGPT.
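
The paper does not publish analysis code, but the two-step approach described above can be illustrated with a minimal, hypothetical pandas sketch: flag days whose tweet volume is unusually high, then tally the most frequent tokens per week within each sentiment class. The column names ('date', 'text', 'sentiment'), the z-score spike threshold, and the simple token counting are assumptions for illustration, not the authors' actual method.

```python
# Hypothetical sketch of a spike-detection and weekly sentiment-keyword pipeline.
# Assumes a dataframe of tweets with columns: 'date' (datetime), 'text' (str),
# and a precomputed 'sentiment' label ('positive', 'neutral', 'negative').
from collections import Counter

import pandas as pd


def find_activity_spikes(tweets: pd.DataFrame, z: float = 2.0) -> pd.Series:
    """Flag days whose tweet volume exceeds the mean by z standard deviations."""
    daily = tweets.groupby(tweets["date"].dt.date).size()
    threshold = daily.mean() + z * daily.std()
    return daily[daily > threshold]  # candidate "activity peaks" to inspect by hand


def weekly_top_keywords(tweets: pd.DataFrame, sentiment: str, k: int = 20) -> dict:
    """Return the top-k tokens per ISO week among tweets carrying a given sentiment label."""
    subset = tweets[tweets["sentiment"] == sentiment]
    top = {}
    for week, group in subset.groupby(subset["date"].dt.isocalendar().week):
        tokens = " ".join(group["text"].str.lower()).split()
        top[week] = Counter(tokens).most_common(k)
    return top
```

In the study itself, the flagged peaks were then examined qualitatively by reading the 100 most-engaged tweets from each period, rather than relying on keyword counts alone.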

The results showed that there were 23 peaks in Twitter activity during the study period. The first peak occurred when ChatGPT surpassed 5 million users just five days after its launch, reflecting both the initial buzz and hesitancy surrounding the new tool. The second peak was primarily focused on discussions about ChatGPT’s potential uses. Subsequent peaks explored its utility in academic settings, detection of bias, philosophical thought experiments, discussions of its moral permissibility, and its role as a mirror to humanity.

The analysis of keywords revealed that the most frequent negative terms expressed concerns about ChatGPT’s credibility (e.g., hallucinated, crazy loop, cognitive dissonance, limited knowledge, simple mistakes, overconfidence, misleading), implicit bias in generated responses (e.g., bias, misleading, political bias, wing bias, religious bias), environmental ethics (e.g., fossil fuels), the employment rights of data annotators (e.g., outsourced workers, investigation), and adjacent debates about whether using a neural network trained on existing human works is ethical (e.g., stolen artwork, minimal effort).

Positive and neutral keywords expressed excitement about the general possibilities (e.g., huge breakthrough, biggest tech innovation), particularly in coding (e.g., good debugging companion, insanely useful, code), as a creative tool (e.g., content creation superpower, copywriters), in education (e.g., lesson plans, essays, undergraduate paper, academic purposes, grammar checker), and for personal use (e.g., workout plan, meal plan, calorie targets, personalized meeting templates).

“Overall, sentiments and themes were double-edged, expressing excitement over this powerful new tool and wariness toward its potential for misuse,” the study authors concluded.

The study provides an interesting historical analysis of public discourse about ChatGPT. However, it is worth noting that the study focused solely on English-language tweets, while much of the broader discussion occurred outside of Twitter and in non-English languages.

The paper, “Powerful tool or too powerful? Early public discourse about ChatGPT across 4 million tweets,” was authored by Reuben Ng and Ting Yu Joanne Chow.
