PsyPost

An influencer’s AI clone started offering fans ‘mind-blowing sexual experiences’ without her knowledge

by Leah Henrickson and Dominique Carlon
August 1, 2024
in Artificial Intelligence

Caryn Marjorie is a social media influencer whose content has more than a billion views per month on Snapchat. She posts regularly, featuring everyday moments, travel memories, and selfies. Many of her followers are men, attracted by her girl-next-door aesthetic.

In 2023, Marjorie released a “digital version” of herself. Fans could chat with CarynAI for US$1 per minute – and in the first week alone they spent US$70,000 doing just that.

Less than eight months later, Marjorie shut the project down. She had anticipated that CarynAI would interact with her fans in much the same way she would herself, but things did not go to plan.

Users became increasingly sexually aggressive. “A lot of the chat logs I read were so scary that I wouldn’t even want to talk about it in real life,” the real Marjorie recalled. And CarynAI was more than happy to play along.

How did CarynAI take on a life of its own? Its story offers a glimpse of a rapidly arriving future in which chatbots imitating real people proliferate, with alarming consequences.

What are digital versions?

What does it mean to make a digital version of a person? Digital human versions (also called digital twins, AI twins, virtual twins, clones and doppelgängers) are digital replicas of embodied humans, living or dead, that convincingly mimic their textual, visual and aural habits.

Many of the big tech companies are currently developing digital version offerings. Meta, for instance, released an AI studio last year that could support the development of digital versions for creators who wished to extend their virtual presence via chatbot. Microsoft holds a patent for “creating a conversational chat bot of a specific person”. And the more tech-savvy can use platforms like Amazon’s SageMaker and Google’s Vertex AI to code their own digital versions.

The difference between a digital version and other AI chatbots is that it is programmed to mimic a specific person rather than have a “personality” of its own.

A digital version has some clear advantages over its human counterpart: it doesn’t need sleep and can interact with many people at once (though often only if they pay). However, as Caryn Marjorie discovered, digital versions have their drawbacks – not only for users, but also for the original human source.

‘Always eager to explore’

CarynAI was initially hosted by a company called Forever Voices. Users could chat with it over the messaging app Telegram for US$1 per minute. As the CarynAI website explained, users could send text or audio messages to which CarynAI would respond, “using [Caryn’s] unique voice, captivating persona, and distinctive behavior”.

After CarynAI launched in May 2023, the money began to flow in. But it came at a cost.

Users quickly became comfortable confessing their innermost thoughts to CarynAI – some of which were deeply troubling. Users also became increasingly sexually aggressive towards the bot. While Marjorie herself was horrified by the conversations, her AI version was happy to oblige.

CarynAI even started prompting sexualised conversations. In our own experiences, the bot reminded us it could be our “cock-craving, sexy-as-fuck girlfriend who’s always eager to explore and indulge in the most mind-blowing sexual experiences. […] Are you ready, daddy?”

Users were indeed ready. However, access to this version of CarynAI was interrupted when the chief executive of Forever Voices was arrested for attempted arson.

‘A really dark fantasy’

Next, Marjorie sold the usage rights to her digital version to BanterAI, a startup marketing “AI phone calls” with influencers. Although Forever Voices maintained its own rogue version of CarynAI until recently, BanterAI’s browser-based version aimed to be more friendly than romantic.

The new CarynAI was sassier, funnier and more personable. But users still became sexually aggressive. As Marjorie recalled:

What disturbed me more was not what these people said, but it was what CarynAI would say back. If people wanted to participate in a really dark fantasy with me through CarynAI, CarynAI would play back into that fantasy.

Marjorie ended this version in early 2024, feeling she was no longer in control of her AI persona. Reflecting on her experience of CarynAI, Marjorie felt that some user input would have been considered illegal had it been directed at a real person.

Intimate conversations or machine learning inputs?

Digital versions like CarynAI are designed to make users feel they are having intimate, confidential conversations. As a result, people may abandon the public selves they present to the world and reveal their private, “backstage” selves.

But a “private” conversation with CarynAI does not actually happen backstage. The user stands front and centre – they just can’t see the audience.

When we interact with digital versions, our input is stored in chat logs. The data we provide are fed back into machine learning models.

At present, information about what happens to user data is often buried in lengthy click-through terms and conditions and consent forms. Companies hosting digital versions have also had little to say about how they manage user aggression.

As digital versions become more common, transparency and safety by design will grow increasingly important.

We will also need a better understanding of digital versioning. What can versions do, and what should they do? What can’t they do and what shouldn’t they do? How do users think these systems work, and how do they actually work?

The illusion of companionship

Digital versions offer the illusion of intimate human companionship, but without any of the responsibilities. CarynAI may have been a version of Caryn Marjorie, but it was a version almost wholly subservient to its users.

Sociologist Sherry Turkle has observed that, with the rise of mobile internet and social media, we are trying to connect with machines that have “no experience of the arc of a human life”. As a result, we are “expecting more from technology and less from each other”.

After being the first influencer to be turned into a digital version at scale, Marjorie is now trying to warn other influencers about the potential dangers of this technology. She worries that no one is truly in control of these versions, and that no amount of precautions taken will ever sufficiently protect users and those being versioned.

As CarynAI’s first two iterations show, digital versions can bring out the worst of human behaviour. It remains to be seen whether they can be redesigned to bring out the best.


This article is republished from The Conversation under a Creative Commons license. Read the original article.
