As generative AI tools become staples in art education, a new study uncovers who misuses them most. Research on Chinese art students connects “dark traits” like psychopathy to academic dishonesty, negative thinking, and a heavier reliance on AI technologies.
Can you trust AI with your toughest moral questions? A new study suggests thinking twice. Researchers found large language models consistently favor inaction and “no” in ethical dilemmas.
Researchers at MIT investigated how writing with ChatGPT affects brain activity and recall. Their findings indicate that reliance on AI may lead to reduced mental engagement, prompting concerns about cognitive “offloading” and its implications for education.
A new study finds that readers often misunderstand AI’s role in news writing, creating their own explanations based on limited information. Without clear byline disclosures, many assume the worst.
A new study published in The Journal of Creative Behavior offers insight into how people think about their own creativity when working with artificial intelligence.
A new study finds that surges in visual propaganda—like memes and doctored images—often precede political violence. By combining AI with expert analysis, researchers tracked manipulated content leading up to Russia’s invasion of Ukraine, revealing early warning signs of instability.
A new study using data from over 11,000 adolescents found that sleep disturbances were the most powerful predictor of future mental health problems—more so than trauma or family history. AI models based on questionnaires outperformed those using brain scans.
A new study shows that when workers feel threatened by artificial intelligence, they tend to highlight creativity—rather than technical or social skills—in job applications and education choices. The research suggests people see creativity as a uniquely human skill machines can’t replicate.
Can artificial intelligence truly “understand” language the way humans do? A neuroscientist challenges this popular belief, arguing that machines may generate convincing text—but they lack the emotional, contextual, and biological grounding that gives real meaning to human communication.
OpenAI’s GPT-4o demonstrated behavior resembling cognitive dissonance in a psychological experiment. After writing essays about Vladimir Putin, the AI changed its evaluations—especially when it thought it had freely chosen which argument to make, echoing patterns seen in people.
A new study highlights cultural differences in attitudes toward AI companionship. East Asian participants were more open to emotionally connecting with chatbots, a pattern linked to greater anthropomorphism and differing exposure to social robots across regions.
New research reveals a surprising downside to AI transparency: people who admit to using AI at work are seen as less trustworthy. Across 13 experiments, disclosing AI use consistently reduced credibility—even among tech-savvy evaluators and in professional contexts.
A new study suggests that conscientious students are less likely to use generative AI tools like ChatGPT and that this may work in their favor. Frequent AI users reported lower grades, weaker academic confidence, and greater feelings of helplessness.
Researchers developed a large-scale system that detects political bias in web-based news outlets by examining topic selection, tone, and coverage patterns. The AI tool offers transparency and accuracy—even outperforming large language models.
A new study published in Acta Psychologica reveals that people’s judgments about whether a face is real or AI-generated are influenced by facial attractiveness and personality traits such as narcissism and honesty-humility—even when all the images are of real people.