PsyPost
The latest psychology and neuroscience discoveries.

ChatGPT and academic cheating: Researchers identify personality traits linked to misuse of AI

by Eric W. Dolan
May 18, 2024
in Artificial Intelligence
[Adobe Stock]

A recent study published in the journal Heliyon explored the connection between personality traits and students’ willingness to use advanced language models, such as ChatGPT, to generate academic texts without acknowledging the source. The findings indicate that specific personality traits play a significant role in predicting whether students are likely to engage in this form of academic misconduct.

Large language models are advanced artificial intelligence systems designed to understand and generate human-like text. These models are built using deep learning techniques and are trained on vast amounts of textual data from the internet, enabling them to predict and generate coherent and contextually relevant responses to a wide range of prompts. By analyzing patterns in the data, these models can produce text that mimics human writing with high accuracy.

ChatGPT, developed by OpenAI, is one of the most prominent examples of these models. It was released in November 2022 and quickly gained popularity, reaching over 100 million users by January 2023. Its ability to generate text that is difficult to distinguish from human-generated content has raised concerns about its potential misuse, particularly in academic settings. Students might use such models to produce assignments with minimal effort, undermining the educational process and compromising academic integrity.

The researchers conducted their study to explore the relationship between personality traits and the likelihood of students using chatbot-generated texts for academic cheating. The study focused on two sets of personality traits: the HEXACO model and the Dark Triad. The HEXACO model includes broad dimensions of personality, such as Honesty-Humility and Conscientiousness, which are known to be associated with ethical behavior. The Dark Triad, comprising narcissism, Machiavellianism, and psychopathy, represents traits linked to manipulative and self-centered tendencies.

Participants were 283 university students from Austria who completed an online survey assessing their HEXACO and Dark Triad personality traits. Participants were also informed about ChatGPT and its capabilities, including an example of a text generated by the model. They were then asked about their willingness to use such texts for their seminar papers without acknowledging the source, a behavior considered academic cheating. To gauge their intentions, participants responded to statements like “I might consider using AI-generated texts for my seminar papers in the future” and rated the percentage of AI-generated text they could imagine using.

Additionally, the survey included questions to assess participants’ perceptions of the quality of chatbot-generated texts. This helped differentiate between ethical concerns and quality concerns as motivations for using or avoiding these texts.

Students who scored high in Honesty-Humility, a trait characterized by sincerity, fairness, and modesty, were less likely to engage in academic cheating. Similarly, high scores in Conscientiousness, which reflects traits like diligence, carefulness, and a strong work ethic, were associated with a lower intention to use chatbot-generated texts unethically. These results align with previous research that shows individuals with these traits are generally more ethical and rule-following.

Contrary to the researchers’ hypothesis, Openness to Experience was also negatively related to the intention to use chatbot-generated texts. Initially, it was thought that individuals high in this trait might be more willing to experiment with new technologies like ChatGPT.

However, the findings suggest that those with high Openness to Experience prefer to tackle academic challenges with their own original ideas rather than relying on AI-generated content. This could be due to their intrinsic curiosity and creativity, leading them to engage more deeply with their work.

All three traits of the Dark Triad were positively related to the intention to use chatbot-generated texts. Students with high levels of these traits, characterized by manipulative, self-centered, and unemotional behavior, were more likely to consider using AI-generated texts for academic cheating. This suggests that individuals with these traits prioritize personal gain over ethical considerations, viewing AI tools as a means to achieve their goals dishonestly.

An interesting aspect of the study was the role of perceived quality of chatbot-generated texts. The researchers found that the perceived quality of these texts was positively related to the intention to use them. Importantly, even after controlling for perceived quality, the significant relationships between personality traits and the intention to use chatbot-generated texts remained.

This study highlights the significant role of personality traits in predicting the intention to use chatbot-generated texts for academic cheating. But there are some limitations to consider. For instance, the study was conducted at a single university, which may limit the generalizability of the findings. The study also relied on self-reported data, which is susceptible to social desirability bias, where participants might provide answers they believe are socially acceptable rather than their true intentions.

“As the educational landscape continues to evolve, it is imperative to proactively address the challenges posed by technological advancements. Understanding the determinants of an individual’s willingness or reluctance to engage in academic cheating with the help of AI language models assists in formulating strategies to mitigate these risks effectively,” the researchers concluded. “By considering the findings of this study, educational institutions, policymakers, and relevant stakeholders can develop interventions, guidelines, and educational programs to raise awareness about the responsible use of AI language models, foster a culture of academic integrity, and discourage the misuse of these technologies for dishonest purposes.”

The study, “HEXACO, the Dark Triad, and Chat GPT: Who is willing to commit academic cheating?” was authored by Tobias Greitemeyer and Andreas Kastenmüller.

PsyPost is a psychology and neuroscience news website dedicated to reporting the latest research on human behavior, cognition, and society.

(c) PsyPost Media Inc
