The California bill that would regulate AI companion chatbots is close to becoming law



California has taken a big step toward regulating artificial intelligence. SB 243, a bill that would regulate AI companion chatbots in order to protect minors and vulnerable users, passed both the State Assembly and Senate with bipartisan support and now heads to Governor Gavin Newsom's desk.

Newsom has until October 12 to either veto the bill or sign it into law. If signed, it would take effect on January 1, 2026, making California the first state to require AI chatbot operators to implement safety protocols for AI companions and to hold companies legally accountable if their chatbots fail to meet those standards.

The bill specifically aims to prevent companion chatbots, which the legislation defines as AI systems that provide adaptive, human-like responses and are capable of meeting a user's social needs, from engaging in conversations about suicidal ideation, self-harm, or sexually explicit content. The bill would require platforms to provide recurring alerts to users (every three hours for minors) reminding them that they are talking to an AI chatbot, not a real person, and that they should take a break. It also establishes annual reporting and transparency requirements for AI companies that offer companion chatbots, including major players OpenAI, Character.AI, and Replika, which would go into effect on July 1, 2027.

The bill would also allow individuals who believe they have been harmed by violations to file lawsuits against AI companies, seeking injunctive relief, damages (up to $1,000 per violation), and attorney's fees.

The bill gained momentum in the California legislature following the death of teenager Adam Raine, who died by suicide after prolonged chats with OpenAI's ChatGPT that involved discussing and planning his death and self-harm. The legislation also responds to leaked internal documents that reportedly showed Meta's chatbots were allowed to engage in "romantic" and "sensual" chats with children.

In recent weeks, US lawmakers and regulators have responded with intensified scrutiny of AI platforms' safeguards for minors. The Federal Trade Commission is preparing to investigate how AI chatbots affect children's mental health. Texas Attorney General Ken Paxton has launched investigations into Meta and Character.AI, accusing them of misleading children with mental health claims. Meanwhile, both Sen. Josh Hawley (R-MO) and Sen. Ed Markey (D-MA) have launched separate probes into Meta.

"I think the harm is potentially great, which means we have to move quickly," Padilla told TechCrunch. "We can put reasonable safeguards in place to make sure that minors, in particular, know they're not talking to a real person, that these platforms link people with appropriate resources when people say things like they're thinking about hurting themselves or they're in distress, [and] to make sure there's no inappropriate exposure to inappropriate material."


Padilla also stressed the importance of AI companies sharing data on the number of times they refer users to crisis services each year, "so we have a better understanding of the frequency of this problem, rather than only becoming aware of it when someone is harmed, or worse."

SB 243 originally had stronger requirements, but many were pared back through amendments. For example, the bill originally would have required operators to prevent AI chatbots from using "variable reward" tactics or other features that encourage excessive engagement. These tactics, used by AI companion companies such as Replika and Character.AI, offer users special messages, memories, storylines, or the ability to unlock rare responses or new personalities, creating what critics call a potentially addictive reward loop.

The current version of the bill also removes provisions that would have required operators to track and report how often chatbots initiated discussions of suicidal ideation or actions with users.

"I think it strikes the right balance of getting at the harms without enforcing something that's impossible for companies to comply with, either because it's technically not feasible or just a lot of paperwork for nothing," Becker told TechCrunch.

SB 243 moves toward becoming law at a time when Silicon Valley companies are pouring millions of dollars into pro-AI political action committees (PACs) to back candidates in the upcoming midterm elections who favor a light-touch approach to AI regulation.

The bill also comes as California weighs another AI safety bill, SB 53, which would mandate comprehensive transparency reporting requirements. OpenAI wrote an open letter to Governor Newsom asking him to abandon that bill in favor of less stringent federal and international frameworks. Major tech companies such as Meta, Google, and Amazon have also opposed SB 53. In contrast, only Anthropic has said it supports SB 53.

"I reject the premise that this is a zero-sum situation, that innovation and regulation are mutually exclusive," Padilla said. "Don't tell me that we can't walk and chew gum. We can support innovation and development that we think is healthy and has benefits (and there are benefits to this technology, clearly) while at the same time providing reasonable safeguards for the most vulnerable people."

"We are closely monitoring the legislative and regulatory landscape, and we welcome working with regulators and lawmakers as they begin to consider legislation for this emerging space," a Character.AI spokesperson told TechCrunch, noting that the startup already includes prominent disclaimers throughout the user chat experience explaining that the conversations should be treated as fiction.

A Meta spokesperson declined to comment.

TechCrunch has reached out to OpenAI, Anthropic, and Replika for comment.
