Scott Wiener on his battle to make Big Tech reveal the risks of artificial intelligence

By Maxwell Zeff


This is not California state Senator Scott Wiener's first attempt to address the risks of artificial intelligence.

In 2024, Silicon Valley launched a fierce campaign against his controversial safety bill, SB 1047, which would have made technology companies liable for the potential harms of their AI systems. Tech leaders warned that it would smother America's AI boom. Governor Gavin Newsom ultimately vetoed the bill, echoing similar concerns, and a famous AI hacker house promptly threw an “SB 1047 Veto Party.” One attendee told me, “Thank God, AI is still legal.”

Now Wiener is back with a new AI safety bill, SB 53, which sits on Governor Newsom's desk awaiting his signature or veto at some point in the next few weeks. This time, the bill is more popular, or at least Silicon Valley does not appear to be at war with it.

Anthropic explicitly endorsed SB 53 earlier this month. Meta spokesperson Jim Cullinan told TechCrunch that the company supports AI regulation that balances guardrails with innovation, saying “SB 53 is a step in that direction,” though there are areas for improvement.

Former White House AI policy adviser Dean Ball told TechCrunch that SB 53 is a “victory for reasonable voices,” and he believes there is a strong chance Governor Newsom will sign it.

If signed, SB 53 would impose some of the first safety reporting requirements in the country on AI giants such as OpenAI, Anthropic, xAI, and Google, which today face no obligation to disclose how they test their AI systems. Many AI labs voluntarily publish safety reports explaining how their models could be used to create bioweapons and other risks, but they do so at their own discretion, and they are not always consistent about it.

The bill requires leading AI labs, specifically those generating more than $500 million in revenue, to publish safety reports for their most capable AI models. Much like SB 1047, the bill focuses specifically on the worst kinds of AI risk: its potential to contribute to human deaths, cyberattacks, and chemical weapons. Governor Newsom is considering several other bills that address other kinds of AI risk, such as engagement-optimization techniques in AI companions.

SB 53 also creates protected channels for AI lab employees to report safety concerns to government officials, and establishes a state-run cloud computing cluster, CalCompute, to provide AI research resources beyond the big tech companies.

One reason SB 53 may be more popular than SB 1047 is that it is less severe. SB 1047 would have made AI companies liable for any harms caused by their AI models, whereas SB 53 focuses more on requiring self-reporting and transparency. SB 53 also applies narrowly to the world's largest tech companies, rather than to startups.

But many in the tech industry still believe that states should leave AI regulation to the federal government. In a recent letter to Governor Newsom, OpenAI argued that AI labs should only have to comply with federal standards, a funny thing to tell a state governor. The venture firm Andreessen Horowitz recently wrote a blog post vaguely suggesting that some bills in California could violate the Constitution's dormant Commerce Clause, which prohibits states from unfairly limiting interstate commerce.

Senator Wiener addresses those concerns head-on: he lacks faith in the federal government to pass meaningful AI safety regulation, so states need to step up. In fact, Wiener believes the Trump administration has been captured by the tech industry, and that recent federal efforts to block all state AI laws are a form of Trump “rewarding his funders.”

The Trump administration has taken a marked turn away from the Biden administration's focus on AI safety, replacing it with an emphasis on growth. Shortly after taking office, Vice President JD Vance appeared at an AI conference in Paris and said: “I'm not here this morning to talk about AI safety, which was the title of the conference a couple of years ago. I'm here to talk about AI opportunity.”

Silicon Valley cheered this shift, embodied in Trump's AI Action Plan, which removed barriers to building the infrastructure needed to train and serve AI models. Today, top tech executives are regularly seen dining at the White House or announcing hundred-billion-dollar data centers alongside President Trump.

Senator Wiener believes it is important for California to lead the nation on AI safety, but without smothering innovation.

I recently sat down with Senator Wiener to discuss his years at the negotiating table with Silicon Valley and why he keeps coming back to AI safety bills. Our conversation has been lightly edited for clarity and brevity. My questions are in bold, and his answers are not.

Maxwell Zeff: Senator Wiener, I met you when SB 1047 was sitting on Governor Newsom's desk. Talk me through the journey you've been on regulating AI safety these past few years.

Scott Wiener: It's been quite a ride, an incredible learning experience, and really rewarding. We've been able to help elevate this issue (of AI safety) not just in California, but in the national and international discourse.

We have this incredibly powerful new technology that's changing the world. How do we make sure it benefits humanity in a way that reduces the risk? How do we promote innovation while also being mindful of public health and public safety? It's an important conversation, in some ways an existential one, about the future. SB 1047, and now SB 53, have helped foster that conversation about safe innovation.

Over your two decades in politics, what have you learned about the importance of regulating Silicon Valley?

I'm the guy who represents San Francisco, the heart of AI innovation. I'm just north of Silicon Valley itself, so we're here in the middle of it all. But we've also seen how the big tech companies, some of the wealthiest companies in the history of the world, have been able to stop federal regulation.

Every time I see tech executives having dinner at the White House with an aspiring dictator, I have to take a deep breath. These are brilliant people who have created real wealth. Many of them are people I represent. It really pains me when I see the deals being struck in Saudi Arabia and the United Arab Emirates, and how that money gets converted into Trump meme coins. It causes me deep concern.

I'm not anti-technology. I want tech innovation. It's very important. But this is an industry that we should not trust to regulate itself or to keep voluntary commitments. That's not casting aspersions on anyone. This is capitalism, and it can create enormous prosperity, but it can also cause harm if there are no sensible regulations to protect the public interest. When it comes to AI safety, we're trying to thread that needle.

SB 53 focuses on the worst harms that AI could cause: death, massive cyberattacks, and the creation of biological weapons. Why focus there?

The risks of AI are varied. There's algorithmic discrimination, job loss, deepfakes, and scams. There have been various bills in California and elsewhere to address those risks. SB 53 was never intended to cover the field and address every risk that AI creates. We're focused on one specific category of risk: catastrophic risk.

This issue came to me organically from people in the AI space in San Francisco: startup founders, AI technologists, and the people building these models. They came to me and said, “This is an issue that needs to be addressed in a thoughtful way.”

Do you believe AI systems are inherently unsafe, or capable of causing death and massive cyberattacks?

I don't think they're inherently safe. I know there are a lot of people working at these labs who care deeply about trying to mitigate risk. And again, it's not about eliminating risk. Life is about risk. Unless you live in your basement and never leave, you're going to encounter risk in your life. Even in your basement, the ceiling could cave in.

Is there a risk that some AI models could be used to cause serious harm to society? Yes, and we know there are people who would love to do that. We should try to make it harder for bad actors to cause those severe harms, and so should the people developing these models.

Anthropic came out in support of SB 53. What have your conversations been like with other players in the industry?

We've talked to everyone: big companies, small startups, investors, and academics. Anthropic has been really constructive. Last year, they never endorsed (SB 1047), but they had positive things to say about aspects of the bill. I don't think (Anthropic) loves every aspect of SB 53, but I think they concluded the bill was worth supporting.

We've had conversations with the big AI labs that don't support the bill, but they're not at war with it the way they were with SB 1047. That's not surprising. SB 1047 was more of a liability bill, while SB 53 is more of a transparency bill. Startups were also less engaged this year because the bill really focuses on the largest companies.

Do you feel pressure from the big AI PACs that have formed in recent months?

This is one of the symptoms of Citizens United. The wealthiest companies in the world can pour unlimited resources into these PACs to try to intimidate elected officials. Under our rules, they have every right to do that. It never affects how I approach policy. There have been groups trying to take me out for as long as I've been in elected office. Various groups have spent millions trying to blow me up, and here I am. I'm in this to do right by my constituents and to try to make my community, San Francisco, and the world a better place.

What is your message to Governor Newsom as he weighs whether to sign or veto this bill?

My message is that we heard you. You vetoed SB 1047 and delivered a very thorough and thoughtful veto message. You wisely convened a working group that produced a very strong report, and we really looked to that report in crafting this bill. The governor laid out a path, and we followed that path to try to reach an agreement, and I hope we got there.


