Too much ChatGPT? Study ties AI reliance to lower grades and motivation

by Eric W. Dolan
May 27, 2025
in Artificial Intelligence
A study published in the journal Education and Information Technologies finds that students who are more conscientious tend to use generative AI tools like ChatGPT less frequently, and that using such tools for academic tasks is associated with lower self-efficacy, worse academic performance, and greater feelings of helplessness. The findings highlight the psychological dynamics behind AI adoption and raise questions about how it may shape students’ learning and motivation.

Generative AI refers to computer systems that can create original content in response to user prompts. Large language models, such as ChatGPT, are a common example. These tools can produce essays, summaries, explanations, and even simulate conversation—making them attractive to students looking for quick help with academic tasks. But their rise has also sparked debate among educators, who are concerned about plagiarism, reduced learning, and the ethical use of AI in classrooms.

“Witnessing excessive reliance among some of my students on generative AI tools like ChatGPT made me wonder whether these tools had implications for students’ long-term learning outcomes and their cognitive capacity,” said study author Sundas Azeem, an assistant professor of management and organizational behavior at SZABIST University.

“It was particularly evident that for those activities and tasks where students relied on generative AI tools, classroom participation and debate were considerably lower, as similar responses from these tools increased student agreement on topics of discussion. With reduced engagement in class, these observations sparked my concern about whether learning goals were actually being met.

“At the time we started this study, most studies on students’ use of generative AI were either opinion-based or theoretical, exploring the ethics of generative AI use,” Azeem continued. “The studies exploring academic performance seldom considered academic grades (CGPA) as outcomes, and also ignored individual differences such as personality traits.

“Despite the widespread use, not all students relied on generative AI equally. Students who were otherwise more responsible, punctual, and participative in class seemed to rely less on generative AI tools. This led me to investigate whether there were personality differences in the use of these tools. This gap, coupled with rising concerns about fairness in grading and academic integrity, inspired this study.”

To explore how students actually engage with generative AI, and how their personality traits influence this behavior, the researchers surveyed 326 undergraduate students from three major universities in Pakistan. The students were enrolled in business-related programs and ranged from their second to their eighth semester. Importantly, the study used a three-wave, time-lagged survey design to gather data over time and minimize common biases in self-reported responses.

At the first time point, students reported their personality traits and perceptions of fairness in their university’s grading system. Specifically, the researchers focused on three personality traits from the Big Five model: conscientiousness, openness to experience, and neuroticism. These traits were selected because of their relevance to academic performance and technology use. For example, conscientious students tend to be organized, self-disciplined, and achievement-oriented. Openness reflects intellectual curiosity and creativity, while neuroticism is associated with anxiety and emotional instability.

At the second time point, participants reported how frequently they used generative AI tools—especially ChatGPT—for academic purposes. In the third and final wave, students completed measures assessing their academic self-efficacy (how capable they felt of succeeding academically) and their experience of learned helplessness (a belief that efforts won’t lead to success), and reported their cumulative grade point average.
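
To make the design concrete, here is a minimal sketch of how data from such a three-wave survey might be assembled for analysis. The column names, IDs, and values below are hypothetical stand-ins, not the study's actual variables or data.

```python
import pandas as pd

# Hypothetical wave files; names and values are illustrative only.
wave1 = pd.DataFrame({"student_id": [1, 2, 3],
                      "conscientiousness": [4.1, 2.8, 3.5],
                      "grading_fairness": [3.5, 4.0, 2.9]})
wave2 = pd.DataFrame({"student_id": [1, 2, 3],
                      "ai_use": [2.0, 4.5, 3.1]})
wave3 = pd.DataFrame({"student_id": [1, 3],  # student 2 dropped out
                      "self_efficacy": [4.2, 3.0],
                      "helplessness": [1.8, 2.5],
                      "cgpa": [3.4, 3.1]})

# An inner join keeps only students who completed all three waves,
# which is how time-lagged designs typically handle attrition.
panel = wave1.merge(wave2, on="student_id").merge(wave3, on="student_id")
print(panel)
```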

Among the three personality traits studied, only conscientiousness was significantly linked to AI use. Students who scored higher in conscientiousness were less likely to use generative AI for academic work. This finding suggests that conscientious individuals may prefer to rely on their own efforts and are less inclined to take shortcuts, aligning with prior research showing that this personality trait is associated with academic honesty and self-directed learning.

“Our study found that students who are more conscientious are less likely to rely on generative AI for academic tasks due to higher self-discipline and perhaps also higher ethical standards,” Azeem told PsyPost. “They may prefer exploring multiple sources of information and other more cognitively engaging learning activities like researching and discussions.”

Contrary to expectations, openness to experience and neuroticism were not significantly related to AI use. While previous research has linked openness to a greater willingness to try new technologies, the researchers suggest that students high in openness may also value originality and independent thought, potentially reducing their reliance on AI-generated content. Similarly, students high in neuroticism may feel uneasy about the accuracy or ethics of AI tools, leading to ambivalence about their use.

The researchers also examined how perceptions of fairness in grading might shape these relationships. But only one interaction—between openness and grading fairness—was marginally significant. For students high in openness, perceiving the grading system as fair was associated with lower AI use. The researchers did not find significant interactions involving conscientiousness or neuroticism.
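
For readers who want to see what such a moderation test looks like, below is a minimal sketch using an ordinary least squares regression with a product term, the standard way to test an interaction. The variable names, simulated sample, and effect sizes are illustrative assumptions; only the sample size and the direction of the reported interaction are taken from the article.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-ins for the survey measures (hypothetical values).
rng = np.random.default_rng(42)
n = 326
df = pd.DataFrame({
    "openness": rng.normal(0.0, 1.0, n),   # standardized trait score
    "fairness": rng.normal(0.0, 1.0, n),   # perceived grading fairness
})
# Build in the pattern the article describes: higher perceived fairness
# weakens the openness -> AI-use link (a negative interaction).
df["ai_use"] = (0.10 * df["openness"]
                - 0.20 * df["openness"] * df["fairness"]
                + rng.normal(0.0, 1.0, n))

# "openness * fairness" expands to both main effects plus their product;
# the coefficient on openness:fairness is the moderation effect.
model = smf.ols("ai_use ~ openness * fairness", data=df).fit()
print(model.summary())
```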

“One surprising finding was that fairness in grading only marginally influenced generative AI use, and only for the personality trait openness to experience, showing that regardless of grading fairness, generative AI is gaining widespread popularity,” Azeem said. “This is telling, given that we had anticipated students would rely more on generative AI tools, with an aim to score higher grades, when they perceived grading was unfair. Also, while individuals high in openness to experience are generally early adopters of technologies, our study reported no such findings.”

More broadly, the researchers found that greater use of generative AI in academic tasks was associated with several negative outcomes. Students who relied more heavily on AI reported lower academic self-efficacy. In other words, they felt less capable of succeeding on their own. They also experienced greater feelings of learned helplessness—a state in which individuals believe that effort is futile and outcomes are beyond their control. Additionally, higher AI use was linked to slightly lower academic performance as measured by GPA.

These patterns suggest that while generative AI may offer short-term convenience, its overuse could undermine students’ sense of agency and reduce their motivation to engage deeply with their coursework. Over time, this reliance might erode critical thinking and problem-solving skills that are essential for long-term success.

Further analysis revealed that the use of generative AI also mediated the link between conscientiousness and academic outcomes. Specifically, students who were more conscientious were less likely to use AI, and this lower use was associated with better academic performance, greater self-efficacy, and less helplessness.
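
This kind of indirect-effect claim is typically tested with a mediation analysis. The sketch below illustrates one way to do so using the Mediation class from statsmodels on simulated data; all variable names and coefficients are hypothetical stand-ins for the constructs described above, not the authors' model or data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.mediation import Mediation

# Hypothetical data mimicking the described pattern: conscientiousness
# lowers AI use, and AI use in turn lowers CGPA. Values are simulated.
rng = np.random.default_rng(7)
n = 326
conscientiousness = rng.normal(0.0, 1.0, n)
ai_use = -0.30 * conscientiousness + rng.normal(0.0, 1.0, n)
cgpa = 3.0 - 0.15 * ai_use + 0.10 * conscientiousness + rng.normal(0.0, 0.3, n)
df = pd.DataFrame({"conscientiousness": conscientiousness,
                   "ai_use": ai_use, "cgpa": cgpa})

# The outcome model includes both exposure and mediator; the mediator
# model regresses the mediator on the exposure. Mediation() then
# estimates the indirect effect (exposure -> mediator -> outcome).
outcome_model = sm.OLS.from_formula("cgpa ~ conscientiousness + ai_use", data=df)
mediator_model = sm.OLS.from_formula("ai_use ~ conscientiousness", data=df)
med = Mediation(outcome_model, mediator_model,
                exposure="conscientiousness", mediator="ai_use")
print(med.fit(n_rep=500).summary())
```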

“A key takeaway for students, teachers, as well as academic leadership is the impact of students’ reliance on generative AI tools on their psychological and learning outcomes,” Azeem told PsyPost. “For example, our findings that generative AI use is associated with reduced academic self-efficacy and higher learned helplessness are concerning as students may start believing that their own efforts do not matter. This may lead to reduced agency where they believe that academic success is dependent on external tools rather than internal competence. As the overuse of generative AI erodes self-efficacy, students may doubt their ability to complete assignments or challenging problems without the help of AI. This may make students passive learners, hesitating to attempt tasks without support.

“When they feel less in control or doubt themselves for a long time, it may lead to distorted learning habits, as they may believe generative AI will always provide the answer. This may also make academic tasks boring rather than challenging, further stunting resilience and intellectual growth. Our findings imply that while generative AI is here to stay, its responsible integration into academia through policy making as well as teacher and student training is key to its effective outcomes.”

“Our findings did not support the common idea that generative AI tools help perform better academically,” Azeem explained. “This makes sense given our findings that generative AI use increases learned helplessness. Academic performance (indicated by CGPA in our study) relies more on individual cognitive abilities and subject knowledge, which may be adversely affected with reduced academic self-efficacy. Accordingly, teachers, students, as well as the general public should exercise caution in relying on generative AI tools excessively.”

The study, like all research, includes some limitations. The sample was limited to business students from Pakistani universities, which may limit the generalizability of the findings to other cultures or academic disciplines. The researchers also relied on self-reported measures, though they took steps to reduce bias by spacing out the surveys and using established scales.

“The self-reported data may be susceptible to social desirability bias,” Azeem noted. “In addition, while our study followed a time-lagged design that enables temporal separation between data collection points, causal directions between generative AI use and its outcomes can be better mapped through a longitudinal design. Likewise, in order to design necessary interventions and training plans, it may help future studies to investigate conditions under which generative AI use leads to more positive and less negative learning outcomes.”

“In the long term, I aim to conduct longitudinal studies that investigate long-term student development, such as creativity, self-regulation, and employability, over multiple semesters. This may help bridge the emerging differences in the literature regarding the positive versus harmful effects of generative AI for students. I also intend to explore other motivational traits besides personality that may influence generative AI use. Perhaps this stream of studies may empower me to design interventions for integrating AI literacy and ethical reasoning for effective generative AI use among students in the long run.”

The findings raise larger questions about the future of education in an era of accessible, powerful AI. If generative tools can complete many academic tasks with minimal effort, students may miss out on learning processes that build confidence, resilience, and critical thinking. On the other hand, AI tools could also be used to support learning, for example, by helping students brainstorm, explore new perspectives, or refine their writing.

“While our study alerts us to the potential adverse effects of generative AI for students, literature is also available supporting its positive outcomes,” Azeem said. “Therefore, as AI tools become increasingly embedded in education, it is vital that policy makers, educators, and edtech developers go beyond binary views of generative AI as either inherently good or bad. I believe that guiding responsible use of generative AI while mitigating risks holds the key to enhanced learning.”

“To be specific, instructor training for designing AI-augmented learning activities can help foster critical thinking. These activities can emphasize student reflection on AI-generated content in order to address some caveats of generative AI use in the classroom. Likewise, promoting fair and transparent grading systems may reduce incentives for misuse. With unchecked and unregulated use of generative AI among students, learned helplessness is likely to become prevalent. This may impair the very capacities that education is intended to develop: independence, critical thinking, and curiosity. Amid all the buzz of educational technology, our study emphasizes that technology adoption is as much a psychological issue as it is a technological and ethical one.”

The study, “Personality correlates of academic use of generative artificial intelligence and its outcomes: does fairness matter?,” was authored by Sundas Azeem and Muhammad Abbas.
