Congress may block states' AI regulations. What that means for you and your privacy

States would be unable to enforce their own regulations on artificial intelligence technology for a decade under a plan being considered in the US House of Representatives. The legislation, in an amendment accepted this week by the House Energy and Commerce Committee, says no state or political subdivision may "enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems" for 10 years. The proposal would still need the approval of both chambers of Congress and President Donald Trump before it could become law.


AI developers and some lawmakers have said federal action is necessary to keep states from creating a patchwork of different rules and regulations across the US that could slow the technology's growth. AI has grown rapidly since ChatGPT exploded onto the scene in late 2022, leading companies to fit the technology into as many spaces as possible. The economic stakes are significant, as the US and China race to see which country's technology will prevail, but generative AI poses privacy, transparency and other risks for consumers that lawmakers have sought to temper.

"We need, as an industry and as a country, one clear federal standard, whatever it may be," Alexandr Wang, founder and CEO of the data company Scale AI, told lawmakers during an April hearing. "But we need one, we need clarity on one federal standard and have preemption to prevent this outcome where you have 50 different standards."

Efforts to limit states' ability to regulate AI could mean fewer consumer protections around a technology that is increasingly seeping into every aspect of American life. "There's been a lot of discussion at the state level, and I think it's important for us to approach this problem at multiple levels," said Anjana Susarla, a professor at Michigan State University who studies AI. "We can approach it at the national level. We can approach it at the state level too. I think we need both."

Several states have already started regulating AI

The proposed language would bar states from enforcing any regulation, including those already on the books. The exceptions are rules and laws that make things easier for AI development and those that apply the same standards to non-AI models and systems that do similar things. These kinds of regulations are already starting to appear. The biggest focus is not in the US but in Europe, where the European Union has already implemented standards for AI. But states are starting to get in on the action.

Colorado passed a set of consumer protections last year, set to take effect in 2026. California adopted more than a dozen AI-related laws last year. Other states have laws and regulations that often deal with specific issues such as deepfakes, or that require AI developers to publish information about their training data. At the local level, some regulations also address potential employment discrimination when AI systems are used in hiring.

"States are all over the map when it comes to what they want to regulate in AI," said Arsen Kourinian, a partner at the law firm Mayer Brown. So far in 2025, state lawmakers have introduced at least 550 proposals around AI, according to the National Conference of State Legislatures. At last month's House committee hearing, Rep. Jay Obernolte, a Republican from California, signaled a desire to get ahead of more state-level regulation. "We have a limited amount of legislative runway to be able to get that problem solved before the states get too far ahead," he said.

While some states have laws on the books, not all of them have taken effect or seen any enforcement. That limits the moratorium's potential short-term impact. "There isn't really any enforcement yet," said Cobun Zweifel-Keegan of the International Association of Privacy Professionals.

A moratorium would likely deter state legislators and policymakers from developing and proposing new regulations, Zweifel-Keegan said. "The federal government would become the primary and potentially sole regulator around AI systems," he said.

What a moratorium on state AI regulation means

AI developers have asked for any guardrails placed on their work to be consistent and streamlined. During a Senate Commerce Committee hearing last week, OpenAI CEO Sam Altman told Sen. Ted Cruz, a Republican from Texas, that an EU-style regulatory system would be "disastrous" for the industry. Altman suggested instead that the industry develop its own standards.

Asked by Sen. Brian Schatz, a Democrat from Hawaii, whether industry self-regulation is enough at the moment, Altman said he believed some guardrails would be good, but "it's easy for it to go too far. As I have learned more about how the world works, I am more afraid that it could go too far and have really bad consequences." (Disclosure: Ziff Davis, CNET's parent company, has filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Kourinian said companies' concerns, both from the developers that create AI systems and from the "deployers" who use them in interactions with consumers, often stem from fears that states will mandate significant work, such as impact assessments or transparency notices, before a product is released. Consumer advocates said more regulations are needed, and that hampering states' ability to act could hurt the privacy and safety of users.

"AI is being used widely to make decisions about people's lives without transparency, accountability or recourse, and it's also facilitating fraud, impersonation and surveillance," said Ben Winters, director of AI and privacy at the Consumer Federation of America. "A 10-year pause would lead to more discrimination, more deception and less control. Simply put, it sides with tech companies over the people they impact."

Kourinian said a moratorium on specific state rules and laws could result in more consumer protection issues being handled in court or by state attorneys general. Existing laws around unfair and deceptive practices that are not specific to AI would still apply. "Time will tell how judges will interpret those issues," he said.

Susarla said the pervasiveness of AI across industries means states might be able to regulate issues such as privacy and transparency more broadly, without focusing on the technology itself. But a moratorium on AI regulation could see such policies tied up in lawsuits. "It has to be some kind of balance between 'we don't want to stop innovation,' but on the other hand we also need to recognize that there can be real consequences," she said.

Zweifel-Keegan said much policymaking around the governance of AI systems happens through those kinds of technology-agnostic rules and laws. "It's also worth remembering that there are a lot of existing laws, and there is a potential to make new laws that don't trigger the moratorium but do apply to AI systems as long as they apply to other systems," he said.




