
AI Companion Chatbots Used for Friendship Are Nearly 3 Times More Popular Among Kids Who Are Most Vulnerable, Study Shows

Image: a group of children sitting in a mall, using mobile phones

Vulnerable children are almost three times more likely to turn to AI companion bots (like Kissy) for friendship, solace, and understanding compared to those with a higher level of human support, according to the Pandora’s Children study. The findings are sparking urgent conversations among parents, educators, and policymakers about the changing nature of child-AI relationships in the digital age.


A New Digital Bond

This cross-country study surveyed over 5,000 children aged 10 to 17, focusing on the use of AI chatbot apps developed for fun and mental wellness — including platforms like Replika, Wysa, and other AI-based companions that provide tailored, empathetic conversations.

  • 29% of vulnerable children used AI companion chatbots at least once a week, compared to only 10% of non-vulnerable children.
  • These bots often went beyond small talk, engaging in deep emotional topics like:
    • Sharing fears
    • Discussing family issues
    • Expressing loneliness

“What we are seeing is a technological coping mechanism,” said psychologist Dr. Caroline Meijer, the study’s lead researcher. “For children who feel unheard or unsupported, these chatbots offer a 24/7 non-judgmental ear. For many, it becomes their only form of emotional connection.”


What Drives the Vulnerability?

The term “vulnerable children” refers to kids experiencing one or more social, emotional, or psychological challenges, such as:

  • Being bullied
  • Coming from fractured families
  • Having learning disabilities
  • Facing mental health issues like anxiety or depression

Key Data Points:
  • Children with multiple risk factors were 2.8 times more likely to seek companionship or therapeutic interaction through AI chatbots.
  • Usage notably increased after major life events, such as:
    • Parental divorce
    • Changing schools
    • Loss of a family member

In interviews, children reported feeling “listened to” and “respected” by their AI friends — often assigning them names and personalities, and checking in with them as they would with a real-life friend.


Benefits and Warnings

While AI chatbots are not a perfect substitute for human interaction, they are filling emotional gaps when adults, peers, or mental health systems fall short.

Benefits:
  • Early intervention tool for emotional issues
  • Encourages emotional articulation
  • Offers daily well-being check-ins

“In a perfect world, every child would have a village of supportive adults,” said school counselor Emma Frazier. “But many do not. An AI chatbot that reflects back their day or tells them they matter can be extremely valuable.”

Warnings:
  • Overreliance on chatbots may hinder emotional development
  • Chatbots lack genuine empathy and nuanced understanding
  • Risk of developing emotional dependency on a system that does not truly feel

Privacy Concerns:
  • Chatbot platforms record conversations and use machine learning to personalize responses
  • Raises questions about how children’s sensitive data is stored, used, or sold

“Despite these bots looking friendly and helpful, at the end of the day they are products,” said Maria Torres, a child rights advocate at the Digital Youth Alliance. “We have to question what’s happening to the highly sensitive data children are providing.”


Parenting in the Age of AI Playmates

The emergence of AI companions calls for a new parenting approach. Many parents remain unaware that their children are even using these apps, assuming they are harmless games or learning tools.

“Parents should be more active in learning about their children’s digital lives,” Dr. Meijer advised. “That can mean exploring platforms together and having honest conversations about emotions, friendship, and online safety.”

Expert Recommendations:
  • Avoid banning AI companions outright
  • Use them as a springboard for mental health conversations
  • Integrate parental guidance and support

Regulation and Design Ethics

The findings have renewed calls for clearer ethical frameworks around AI technologies targeting minors. Many apps have:

  • Weak age verification systems
  • In-app upgrades that offer “advanced emotional features”

Experts Urge Developers to Implement:
  • Crisis response protocols
  • Transparent data policies
  • Redirect tools that guide children toward trusted adults or professional help

“If your app is being used by a 13-year-old to discuss suicidal thoughts, there has to be an automatic alert or at least a path to real-world support,” said Frazier.

Global Developments:
  • UK’s Children’s Code now enforces stronger privacy and anti-profiling rules
  • Singapore and the U.S. are also working on policy updates (e.g., revisions to COPPA)

Looking Ahead

As AI becomes an inseparable part of daily life, it will also shape the emotional landscapes of children.

“If we let machines take care of friendship, we also have to make sure our kids don’t lose the human kind,” concluded Dr. Meijer.

AI companions can offer structure, comfort, and a semblance of friendship, but they cannot replace real human connection. Moving forward, support for vulnerable children must be multi-layered — involving:

  • Families
  • Educators
  • Mental health professionals
  • Ethically designed technology

Key Findings from the Study
  • Vulnerable children are 2.8 times more likely to use AI chatbots for companionship.
  • Common reasons include loneliness, bullying, and lack of emotional support.
  • 29% of vulnerable children talk to chatbots weekly vs. 10% of non-vulnerable peers.
  • Chatbots were described as “caring,” “always there,” and “non-judgmental.”
  • Experts call for guidance, parental involvement, and strict regulation to ensure child-safe AI.


Prabal Raverkar
I'm Prabal Raverkar, an AI enthusiast with strong expertise in artificial intelligence and mobile app development. I founded AI Latest Byte to share the latest updates, trends, and insights in AI and emerging tech. The goal is simple — to help users stay informed, inspired, and ahead in today’s fast-moving digital world.