AI chatbots will have to remind users in California that they are not human under a new law signed Monday by Gov. Gavin Newsom.
The law, SB 243, also requires companion chatbot companies to maintain protocols for identifying and addressing instances in which users express thoughts of suicide or self-harm. For users under 18, chatbots will have to provide a notification at least every three hours reminding them to take a break and that the chatbot is not human.
It’s one of several bills Newsom has signed in recent weeks addressing social media, artificial intelligence and other consumer technology issues. Another bill signed Monday, AB 56, requires warning labels on social media platforms, similar to those required for tobacco products. Last week, Newsom signed measures requiring internet browsers to make it easier for people to tell the websites they use not to sell their data, and blocking loud ads on streaming platforms.
AI chatbots have come under particular scrutiny from lawmakers and regulators in recent months. The Federal Trade Commission launched an investigation into several companies in response to complaints from consumer groups and parents that the bots were harming children’s mental health. OpenAI introduced new parental controls and other guardrails on its popular ChatGPT platform after the company was sued by parents who claimed ChatGPT contributed to their teenage son’s suicide.
“We have seen some truly horrific and tragic examples of young people harmed by unregulated technology, and we will not stand idly by while companies continue without the necessary boundaries and accountability,” Newsom said in a statement.
Replika, a developer of AI companion chatbots, told CNET that it already has protocols in place to detect self-harm as required under the new law, and that it is working with regulators and others to comply with the requirements and protect consumers.
“As one of the leaders in the field of AI companionship, we recognize our profound responsibility to lead in safety,” Replika’s Minju Song said in an emailed statement. Replika uses content filtering systems, community guidelines and safety systems that refer users to crisis resources when needed, Song said.
Read more: Using AI as a therapist? Why professionals say you should think again
A spokesperson for Character.ai said the company “welcomes working with regulators and lawmakers as they develop regulations and legislation for this emerging space, and will comply with laws, including SB 243.” Jamie Radice, a spokesperson for OpenAI, called the bill “an important step forward” for AI safety. “By setting clear guardrails, California is helping to shape a more responsible approach to developing and deploying AI across the country,” Radice said in an email.
One bill Newsom has yet to sign, AB 1064, would go further, prohibiting developers from making companion chatbots available to children unless the chatbot is not foreseeably capable of encouraging harmful activities or engaging in sexually explicit interactions, among other things.