On Monday, California Gov. Gavin Newsom signed legislation enacting new regulations for AI chatbots, making his state the first to require companies to implement safety protocols for AI companion bots.
Quick background: The bills were first introduced by California state lawmakers at the beginning of this year. They gained momentum after the death of California teen Adam Raine, who took his own life in April 2025 following a long series of conversations with ChatGPT about suicide, prompting his parents to file a wrongful death lawsuit against OpenAI.
- The new AI legislation also follows the August leak of internal Meta guidelines that showed the tech giant explicitly allowed chatbots to engage in “romantic” and “sensual” chats with children.
What’s in the bills?
Starting January 1, California’s newly signed legislation will require major chatbot operators—including OpenAI, Anthropic, Meta, Character AI, and Replika—to implement a long list of new safeguards.
Under the new laws:
- AI systems must not encourage or discuss topics like suicide or self-harm, and must instead refer users to suicide hotlines or similar crisis services.
- Companies must offer minors break reminders every three hours and prevent them from viewing sexually explicit chatbot-generated images.
- Chatbot operators must warn users of the risks of AI companions, make it clear that interactions are artificially generated, and prevent their bots from pretending to be healthcare professionals.
- Device makers like Apple and Google must implement tools to verify users’ ages before they can use AI chatbot apps.
Some companies are already implementing many of these safeguards. OpenAI in particular has begun rolling out parental controls, content protections, and a self-harm detection system for ChatGPT users under 18.
However… other groups opposed California’s new bill, citing concerns it would stifle innovation. These include TechNet, an industry group that lobbies lawmakers on behalf of tech executives.
Additionally, a number of child safety groups—including Common Sense Media and Tech Oversight California—came out against the bill, but for a very different reason: its “industry-friendly exemptions.”
Big picture: 72% of Americans between the ages of 13 and 17 say they’ve used AI chatbots for companionship at least once, while over half of that age group (52%) qualifies as regular users who interact with AI companions at least a few times per month, according to new research from Common Sense Media.