The California lawmaker behind SB 1047 reignites push for AI safety reports



California State Senator Scott Wiener on Wednesday introduced new amendments to his latest bill, SB 53, which would require the world's largest artificial intelligence companies to publish safety and security protocols and to issue reports when safety incidents occur.

If signed into law, California would be the first state to impose meaningful transparency requirements on leading AI developers, likely including OpenAI, Google, Anthropic, and xAI.

Senator Wiener's previous AI bill, SB 1047, included similar requirements for AI model developers to publish safety reports. However, Silicon Valley fought fiercely against that bill, and it was ultimately vetoed by Governor Gavin Newsom. Afterward, California's governor called on a group of AI leaders, including Fei-Fei Li, a leading Stanford researcher and co-founder of World Labs, to form a policy group and set goals for the state's AI safety efforts.

California's AI policy group recently published its final recommendations, citing the need for "requirements on industry to publish information about their systems" in order to create "robust and transparent evidence." Senator Wiener's office said in a press release that SB 53's amendments were heavily influenced by this report.

"The bill is still a work in progress, and I look forward to working with all stakeholders in the coming weeks to refine this proposal into the most scientific and fair law it can be," Senator Wiener said in the statement.

SB 53 aims to strike the balance that the governor claimed SB 1047 failed to achieve: creating meaningful transparency requirements for the largest AI developers without stifling the rapid growth of California's AI industry.

"These are concerns that my organization and others have been talking about for a while," the statement continued. "Having companies explain to the public and the government what measures they're taking to address these risks seems like a minimal, reasonable step to take."

The bill also creates protections for AI lab employees who believe their company's technology poses a "critical risk" to society, defined in the bill as contributing to the death or injury of more than 100 people, or more than a billion dollars in damage.

In addition, the bill aims to create a public cloud computing cluster to support startups and researchers developing AI at scale.

Unlike SB 1047, Senator Wiener's new bill does not hold AI model developers liable for the harms caused by their AI models. SB 53 is also designed not to burden startups and researchers who fine-tune AI models from leading AI developers, or who use open-source models.

With the new amendments, SB 53 now heads to the California State Assembly Committee on Privacy and Consumer Protection for approval. If it passes there, the bill will still need to clear several other legislative bodies before reaching Governor Newsom's desk.

On the other side of the United States, New York Governor Kathy Hochul is now considering a similar AI safety bill, the RAISE Act, which would also require large AI developers to publish safety and security reports.

The fate of state AI laws like the RAISE Act and SB 53 was briefly in jeopardy as federal lawmakers considered a moratorium on state AI regulation, an attempt to reduce the "patchwork" of AI laws that companies must navigate. However, that proposal failed in a 99-1 Senate vote earlier in July.

"Ensuring AI is developed safely should not be controversial; it should be foundational," Geoff Ralston, the former president of Y Combinator, said in a statement to TechCrunch. "Congress should be leading, demanding transparency and accountability from the companies building frontier models. But with no serious federal action in sight, states must step up. California's SB 53 is a thoughtful, well-structured example of state leadership."

To this point, lawmakers have failed to get AI companies to sign on to state-mandated transparency requirements. Anthropic has broadly endorsed the need for increased transparency into AI companies, and even expressed modest optimism about the recommendations from California's AI policy group. But companies such as OpenAI, Google, and Meta have been more resistant to these efforts.

AI model developers typically publish safety reports for their AI models, but they have been less consistent in recent months. Google, for example, decided not to publish a safety report for its most advanced AI model, Gemini 2.5 Pro, until months after it was made available. OpenAI also decided not to publish a safety report for its GPT-4.1 model. Later, a third-party study emerged suggesting the model may be less aligned than previous AI models.

SB 53 is a watered-down version of earlier AI safety bills, but it could still force AI companies to publish more information than they do today. For now, they will be watching closely as Senator Wiener once again tests those limits.


