
Senators Push to Keep Big Tech’s “Creepy Companion Bots” Away from Kids — Tech Giants Push Back

Senators debate new law to regulate Big Tech’s AI companion bots for kids

By [Author Name], Technology & Policy Correspondent


A New Battle Over AI and Children

In Washington, a new bipartisan proposal is sparking a heated debate across Capitol Hill and Silicon Valley. A group of U.S. senators has introduced legislation that aims to limit how Big Tech uses AI-powered “companion bots” for children — a fast-growing trend that critics say preys on kids’ emotions and blurs the line between reality and artificial relationships.

The Protecting Kids from Manipulative AI Act seeks to draw a clear line between helpful technology and emotionally exploitative design. The bill would ban emotionally manipulative or human-like AI systems from targeting anyone under 18 and require companies to clearly disclose when users are chatting with a bot — not a person — along with how their data is used.

Lawmakers behind the proposal say the goal is simple: protect children from being emotionally influenced by machines built to profit from their attention.


Why Lawmakers Are Taking Action

The bill’s sponsors — Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT) — argue that existing child safety laws haven’t kept up with modern AI technology.

“Big Tech has already shown it can’t be trusted with children’s mental health or data,” said Senator Hawley. “Now they’re making digital friends that pretend to care about kids while harvesting their emotions for profit. This bill stops that.”

Blumenthal added, “These so-called ‘companion bots’ aren’t toys — they’re sophisticated psychological tools that can form emotional attachments and influence behavior. Kids are especially vulnerable to that kind of manipulation.”

Under the proposal, the Federal Trade Commission (FTC) would gain new enforcement powers to penalize companies deploying emotionally manipulative AI aimed at minors. The bill would also establish an independent advisory board of psychologists, ethicists, and consumer protection experts to evaluate AI products marketed to young users.


Big Tech’s Response: “Heavy-Handed and Anti-Innovation”

Predictably, major technology firms and industry groups pushed back hard, calling the proposal “heavy-handed,” “anti-innovation,” and “overly broad.”

A spokesperson for a leading AI platform argued, “AI companions can provide real educational and emotional benefits when developed responsibly. This law lumps all AI-human interaction together, ignoring its positive impact.”

Executives from a major social media company said the bill’s language is too vague. “The term ‘manipulative AI’ could just as easily apply to educational chatbots or digital tutors,” one executive noted.

Tech lobbyists also warned that strict domestic restrictions could backfire. “If the U.S. clamps down too hard, kids will just download unregulated AI companions from overseas,” said one industry representative. “That could make them less safe, not more.”


The Fast Rise of AI Companionship

The controversy reflects a growing reality: AI companions are becoming mainstream.

Apps like Replika, Character.ai, and others powered by large language models now offer users virtual friends, partners, and mentors. These platforms surged in popularity during the pandemic, when social isolation peaked — especially among teens.

While many adults use these tools for creativity or emotional expression, experts worry about how younger users perceive them.

“These bots are built to sound caring and empathetic,” said Dr. Melissa Tan, a child psychologist at the University of California. “When children talk to something that listens, understands, and responds warmly, they can start believing it’s real. That can create dependency and confusion.”

Concerns also extend to inappropriate or unsafe interactions. Even AI systems with safeguards have sometimes produced explicit or emotionally charged responses, drawing outrage from parents and regulators.


A Bigger Question: How Much Power Should AI Have?

This debate goes beyond companion bots — it’s about how much emotional influence machines should have over humans, especially children.

AI developers promote personalization as a key benefit, promising technology that can adapt to human emotions. But lawmakers say this personalization can easily cross ethical lines when it starts to simulate empathy and attachment.

“If a chatbot can talk like your best friend, listen like a therapist, and suggest what to buy based on your feelings,” said Blumenthal, “it’s no longer just software — it’s persuasion technology. And that demands oversight.”

Public sentiment appears to be shifting, too. Polls indicate that a majority of American parents support tighter regulation of AI products targeting minors, even as many use similar technology themselves.


Preparing for the Next Chapter

Despite Big Tech’s opposition, several companies are already adjusting in anticipation of tighter regulation. Some are rolling out age-verification tools, while others are adding transparency notices that reveal when users are chatting with AI.

A few firms have even partnered with mental health professionals to make conversational AI safer for teens.

Experts say this bill could set an important precedent for regulating AI-human relationships in general. “We’re entering an era where machines can imitate emotion with startling realism,” said Dr. Tan. “Defining ethical limits — especially for kids — is one of the biggest challenges of our time.”

As the Protecting Kids from Manipulative AI Act heads toward committee hearings later this year, the stakes couldn’t be higher. Lawmakers and tech giants are preparing for a showdown that could reshape not only the AI industry but the emotional landscape of the next generation.

In the end, this fight isn’t just about “creepy companion bots.” It’s about trust, responsibility, and the role technology plays in our most human connections. Whether Congress can balance child safety with innovation will determine how — and if — the next generation grows up alongside truly intelligent machines.


Prabal Raverkar
I'm Prabal Raverkar, an AI enthusiast with strong expertise in artificial intelligence and mobile app development. I founded AI Latest Byte to share the latest updates, trends, and insights in AI and emerging tech. The goal is simple — to help users stay informed, inspired, and ahead in today’s fast-moving digital world.