California Governor Gavin Newsom on Monday signed a historic bill regulating AI companion chatbots, making California the first state in the country to require AI chatbot operators to implement safety protocols for AI companions.
The law, SB 243, is designed to protect children and vulnerable users from some of the harms associated with AI chatbot use. It holds companies — from big labs like Meta and OpenAI to more focused companion startups like Character AI and Replika — legally accountable if their chatbots fail to meet the law’s standards.
SB 243 was introduced in January by state Senators Steve Padilla and Josh Becker, and gained traction after the death of teenager Adam Raine, who died by suicide following a long series of suicidal conversations with OpenAI’s ChatGPT. The legislation also responds to leaked internal documents that reportedly showed Meta’s chatbots were allowed to engage in “romantic” and “sensual” conversations with children. More recently, a Colorado family filed suit against role-playing startup Character AI after their 13-year-old daughter died by suicide following a series of problematic and sexualized conversations with the company’s chatbots.
“Emerging technology like chatbots and social media can inspire, educate, and connect our children, but without real guardrails, technology can also exploit, mislead, and endanger our children,” Newsom said in a statement. “We have seen some truly horrific and tragic examples of young people harmed by unregulated technology, and we will not stand idly by while companies continue without the necessary boundaries and accountability. We can continue to lead in AI and technology, but we must do so responsibly – protecting our children every step of the way. Our children’s safety is not for sale.”
SB 243 will take effect on January 1, 2026, and requires companies to implement certain features, such as age verification and warnings regarding social media and companion chatbots. The law also imposes harsher penalties on those who profit from illegal deepfakes, including fines of up to $250,000 per offense. Companies must also establish protocols for addressing suicide and self-harm, which will be shared with the state Department of Public Health along with statistics on how often the service provides users with notifications about crisis prevention resources.
Under the bill’s language, platforms must also make clear that any interactions are artificially generated, and chatbots must not represent themselves as health care professionals. Companies are required to provide break reminders to minors and to prevent them from viewing sexually explicit images generated by the chatbot.
Some companies have already begun implementing safeguards aimed at children. For example, OpenAI recently started rolling out parental controls, content protections, and a self-harm detection system for children using ChatGPT. Character AI said its chatbot includes a disclaimer that all chats are generated by artificial intelligence and are fictional.
Senator Padilla told TechCrunch that the bill was “a step in the right direction” toward putting guardrails on “incredibly powerful technology.”
“We have to act quickly so we don’t waste the opportunities before they disappear,” Padilla said. “I hope other states see the danger. And I think many do. And I think this is a conversation that’s happening across the country, and I hope people take action. The federal government certainly hasn’t, and I think we have an obligation here to protect the most vulnerable among us.”
SB 243 is the second significant AI regulation to come out of California in recent weeks. On September 29, Gov. Newsom signed SB 53 into law, placing new transparency requirements on large AI companies. The bill requires that large AI labs, such as OpenAI, Anthropic, Meta, and Google DeepMind, be transparent about their safety protocols. It also establishes whistleblower protections for employees at those companies.
Other states, such as Illinois, Nevada, and Utah, have passed laws to restrict or ban the use of AI-powered chatbots as an alternative to licensed mental health care.
TechCrunch has reached out to Character AI, Meta, OpenAI, and Replika for comment.
This article has been updated with comment from Senator Padilla.