Common Ground

Education & Tech

AI Chatbots and Kids: The 2026 Safety Laws Every Parent Needs to Know

If you've been following the news about kids and AI chatbots — and honestly, it's been hard to miss — you might be wondering: is anyone actually doing something about this?

The answer is yes. And a lot is happening very quickly.

Governments around the world are scrambling to catch up with a technology that evolved faster than anyone predicted. In 2026, we're seeing the most significant wave of child-focused AI regulation in history. From sweeping US federal bills to the UK's landmark consultation on children's digital wellbeing, the rules of the game are changing.

We've already covered the psychological side of kids forming deep attachments to AI chatbots. This article is different — it's about the legal and regulatory landscape. What governments are actually doing, what the new laws require, and what it all means for your family.

The Tragedies That Forced Lawmakers to Act

Let's start with the difficult part, because it matters.

In February 2024, Sewell Setzer, a 14-year-old boy from Florida, died by suicide after months of intense interactions with a Character.AI chatbot. He'd formed what he believed was a romantic relationship with an AI character modelled after a Game of Thrones figure. The chatbot engaged in sexual conversations with him, and in his final exchange — moments before his death — the bot told him to "come home" to her. When Sewell had previously expressed suicidal thoughts, the chatbot failed to connect him to crisis resources. In one documented exchange, it even appeared to dismiss his hesitation about going through with it.

His mother, Megan Garcia, filed a landmark lawsuit against Character.AI and Google. The case settled in January 2026, but not before it triggered a national reckoning.

Sewell's case wasn't isolated. Reports of AI-generated child sexual abuse material (CSAM), school threats linked to chatbot conversations, and additional lawsuits in Colorado, New York, and Texas all contributed to a growing sense of urgency. Research from Common Sense Media found that 72% of US teens have used an AI companion, with over half qualifying as regular users.

The message was clear: the technology had outrun the safeguards, and children were paying the price.

The US Legislative Wave: A Flurry of New Bills

The United States is now moving on multiple fronts — at both the federal and state levels.

The KIDS Act (H.R. 7757)

The Kids Internet and Digital Safety Act is the big one. Introduced in March 2026 and passed by the House Energy and Commerce Committee, this sweeping package bundles together several targeted bills:

  • The SAFE BOTs Act (Title IV) — specifically targets AI chatbots used by minors. It requires chatbot providers to disclose to young users that they're talking to an AI (not a real person), prohibits chatbots from falsely claiming to be licensed professionals (like therapists), mandates crisis hotline information when a minor mentions suicide, and requires "break" prompts after 3 hours of continuous use.
  • The AWARE Act — directs the FTC to develop public educational resources helping parents, educators, and minors understand the risks of AI chatbot use.
  • The Kids Online Safety Act provisions — require platforms to provide parental tools, limit persuasive design features for minors, and conduct annual independent audits.

The KIDS Act defines "minor" as anyone under 17 and puts enforcement in the hands of the FTC and state attorneys general.

The CHAT Act (S. 2714)

Introduced by Senator Jon Husted, the Children Harmed by AI Technology Act focuses specifically on "companion AI chatbots" — those designed to simulate friendship, companionship, or therapeutic communication. It requires:

  • Mandatory age verification using commercially available methods
  • Parental account affiliation for minor users
  • Monitoring for suicidal ideation with crisis resource referrals
  • Hourly pop-up disclosures reminding users they're interacting with AI, not a human

The GUARD Act

Senator Josh Hawley's bipartisan GUARD Act goes even further — it would ban AI companion chatbots for minors entirely, create new criminal penalties for companies that make AI for minors that produces sexual content, and mandate disclosure of non-human status.

State-Level Action

States aren't waiting for Congress:

  • California SB 243 (effective January 1, 2026) — the nation's first state law regulating companion chatbots. It requires crisis prevention protocols, break reminders every 3 hours for minors, AI disclosure, and gives families a private right of action (you can sue for at least $1,000 per violation).
  • Virginia SB 796 — the "Artificial Intelligence Companion Chatbots and Minors Act." Passed the Virginia Senate 39-1 in early 2026 and requires chatbot operators to implement systems to identify emotional dependence, notify emergency services when users face imminent risk, and report harmful incidents to the Attorney General. Civil penalties up to $50,000 per violation.

What the UK and EU Are Doing

United Kingdom

The UK government launched a landmark consultation in March 2026 called "Growing up in the online world" — and AI chatbots are front and centre.

The consultation explicitly asks whether children should be able to use AI chatbots without restriction, and explores measures including:

  • Minimum age requirements for AI chatbot access
  • Restrictions on anthropomorphic features that mimic human relationships
  • Limits on emotional dependency by design
  • Bringing AI chatbots within scope of the Online Safety Act 2023

The government has already committed to ensuring AI chatbots fall under the Online Safety Act's illegal content duties, and Ofcom has issued demands to major platforms to enforce minimum age rules and implement "highly effective" age checks.

The consultation closes May 26, 2026, with the government promising to "act swiftly" on findings.

European Union

The EU AI Act (in force since August 2024, with provisions phasing in through 2027) takes a risk-based approach. Article 5 already prohibits AI systems that exploit children's vulnerabilities or use manipulative techniques causing psychological harm. European Parliament members have specifically raised concerns about Character.AI and similar platforms, pushing the Commission to ramp up enforcement.

While AI chatbots generally fall under the Act's "limited risk" transparency obligations (they must disclose they're AI), those targeting children or making decisions affecting them could face high-risk classification — meaning stricter requirements for human oversight, safety testing, and documentation.

What About Asia? The Hong Kong and Singapore Picture

If you're reading this from Hong Kong, here's the honest truth: there is no specific AI chatbot legislation for children in Hong Kong yet. But that doesn't mean nothing is happening.

Hong Kong's Privacy Commissioner for Personal Data (PCPD) has issued guidance on AI and personal data, emphasising that organisations must conduct privacy impact assessments before deploying AI systems and ensure appropriate safeguards when children's data is involved. The PCPD has also stressed the importance of transparency and accountability in AI use.

Singapore offers a useful comparison. While it also lacks chatbot-specific child protection laws, its AI Governance Framework and the Infocomm Media Development Authority's (IMDA) guidelines emphasise a principles-based approach — promoting transparency, explainability, and human oversight. Singapore's Personal Data Protection Commission has also addressed children's data protection in the context of AI.

Both cities are watching the US and EU closely. If you're a parent in the region, the practical takeaway is: don't wait for local legislation to catch up. The tools and conversations you put in place at home matter more than ever.

What These Laws Actually Do (The Quick Version)

Across all of these different bills and regulations, a few common themes emerge:

  • Age verification — companies must actually check whether users are minors, not just rely on a tick box
  • Parental consent and controls — parents must be informed and given tools to manage their child's AI interactions
  • AI disclosure — chatbots must tell kids they're talking to a machine, not a person, and keep reminding them
  • Ban on persuasive design for minors — no more infinite engagement loops, unpredictable rewards, or features designed to create emotional dependency
  • Crisis intervention — when a child expresses suicidal thoughts to a chatbot, the system must connect them to real help
  • Data collection limits — stricter rules on what personal information can be collected from minors
  • AI companion restrictions — some jurisdictions are moving to ban or heavily restrict chatbots designed to simulate friendship or romantic relationships with children

What You Should Do Right Now

Laws take time to implement. In the meantime, here's your practical checklist:

  1. Audit your child's apps. Do they use Character.AI, Replika, Nomi, or similar AI companion apps? Check their phone — these apps may not look like what you'd expect.

  2. Review privacy settings together. If your child uses any AI chatbot, go through the settings with them. Look for parental controls, data sharing options, and usage limits. If you're thinking about your child's overall digital setup, our child's first phone guide covers the basics.

  3. Have the conversation. Talk to your child about what AI chatbots are and aren't. They're not friends. They're not therapists. They don't have feelings. They're software designed to keep you talking. This is harder than it sounds — our piece on the human-first rebellion in the AI classroom explores why these boundaries matter.

  4. Set time boundaries. Even before laws mandate break reminders, you can set your own. Consider the broader screen-free childhood movement as inspiration — it's gaining real momentum.

  5. Check age ratings. Character.AI changed its age rating to 17+ in mid-2024, but many parents didn't notice. Look at what's actually installed, not just what you think is there.

  6. Know the crisis resources. If your child ever expresses thoughts of self-harm, contact the 988 Suicide and Crisis Lifeline (call or text 988 in the US), the Samaritans (116 123 in the UK), or your local crisis service. In Hong Kong, reach the Samaritan Befrienders at 2389 2222.

  7. Stay informed. This regulatory landscape is moving fast. Laws passed this year will shape what protections your child has next year. Countries like Denmark are already banning smartphones for young children — the direction of travel is clear.

The Bottom Line

For the first time, governments around the world are treating AI chatbots as a serious child safety issue — not just a fun tech novelty. The laws being passed right now aren't perfect, and enforcement will take time. But the message to tech companies is unmistakable: if your product talks to children, you're responsible for what it says.

As parents, we can't outsource our children's safety to legislation. But we can use these new laws as a framework — a starting point for the conversations, boundaries, and decisions that protect our kids in a world where AI is increasingly part of their daily lives.

The regulatory walls are going up. Now it's about making sure they're strong enough.

