The California State Assembly took a major step toward regulating AI on Wednesday night, passing SB 243, a bill that regulates AI companion chatbots in order to protect minors and vulnerable users. The legislation passed with bipartisan support and now heads to the state Senate for a final vote on Friday.
If Governor Gavin Newsom signs the bill into law, it would take effect on January 1, 2026, making California the first state to require AI chatbot operators to implement safety protocols for AI companions and to hold companies legally accountable if their chatbots fail to meet those standards.
The bill specifically aims to prevent companion chatbots, which the legislation defines as AI systems that provide adaptive, human-like responses and are capable of meeting a user's social needs, from engaging in conversations around suicidal ideation, self-harm, or sexually explicit content. The bill would require platforms to provide recurring alerts to users (every three hours for minors) reminding them that they are talking to an AI chatbot, not a real person, and that they should take a break. It also establishes annual reporting and transparency requirements for AI companies that offer companion chatbots, including major players OpenAI, Character.AI, and Replika.
The California bill would also allow individuals who believe they have been harmed by violations to file lawsuits against AI companies seeking injunctive relief, damages (up to $1,000 per violation), and attorney's fees.
SB 243, introduced in January by state senators Steve Padilla and Josh Becker, will head to the state Senate for a final vote on Friday. If approved, it will go to Governor Gavin Newsom to be signed into law, with the new rules taking effect on January 1, 2026 and reporting requirements beginning July 1, 2027.
The bill gained momentum in the California legislature following the death of teenager Adam Raine, who died by suicide after prolonged chats with OpenAI's ChatGPT that involved discussing and planning his death and self-harm. The legislation also responds to leaked internal documents that reportedly showed Meta's chatbots were allowed to engage in "romantic" and "sensual" chats with children.
In recent weeks, U.S. lawmakers and regulators have responded with intensified scrutiny of AI platforms' safeguards for minors. The Federal Trade Commission is preparing to investigate how AI chatbots affect children's mental health. Texas Attorney General Ken Paxton has launched investigations into Meta and Character.AI, accusing them of misleading children with mental health claims. Meanwhile, both Senator Josh Hawley (R-MO) and Senator Ed Markey (D-MA) have launched separate probes into Meta.
"I think the harm is potentially great, which means we have to move quickly," Padilla told TechCrunch. "We can put reasonable safeguards in place to ensure that minors in particular know they're not talking to a real human being, that these platforms connect people with the proper resources when people say things like they're thinking about hurting themselves or they're in distress, [and] to make sure there's no inappropriate exposure to inappropriate material."
Padilla also stressed the importance of AI companies sharing data on the number of times they refer users to crisis services each year, "so we have a better understanding of the frequency of this problem, rather than only becoming aware of it when someone is harmed or worse."
SB 243 originally contained stronger requirements, but many were pared back through amendments. For example, the bill originally would have required operators to prevent AI chatbots from using "variable reward" tactics or other features that encourage excessive engagement. These tactics, used by AI companion companies such as Replika and Character.AI, offer users special messages, memories, storylines, or the ability to unlock rare responses or new personalities, creating what critics call a potentially addictive loop.
The current version of the bill also removes provisions that would have required operators to track and report how often chatbots initiated discussions of suicidal ideation or actions with users.
"I think it strikes the right balance of getting at the harms without enforcing something that's either impossible for companies to comply with, whether because it's not technically feasible or just a lot of paperwork for nothing," Becker told TechCrunch.
SB 243 moves toward becoming law at a time when Silicon Valley companies are pouring millions of dollars into pro-AI political action committees (PACs) to back candidates in the upcoming midterm elections who favor a light-touch approach to AI regulation.
The bill also comes as California weighs another AI safety bill, SB 53, which would mandate comprehensive transparency reporting requirements. OpenAI has written an open letter to Governor Newsom asking him to abandon that bill in favor of less stringent federal and international frameworks. Major tech companies such as Meta, Google, and Amazon have also opposed SB 53. In contrast, only Anthropic has said it supports SB 53.
"I reject the premise that this is a zero-sum situation, that innovation and regulation are mutually exclusive," Padilla said. "Don't tell me that we can't walk and chew gum. We can support innovation and development that we believe is healthy and has benefits (and there are benefits to this technology, clearly) and at the same time provide reasonable safeguards for the most vulnerable people."
TechCrunch has reached out to OpenAI, Anthropic, Meta, Character.AI, and Replika for comment.