Google calls for weakened copyright and export rules in AI policy proposal

Google, following on the heels of OpenAI, has published a policy proposal in response to the Trump administration's call for input on a national "AI Action Plan." The tech giant endorsed weak copyright restrictions on AI training, as well as "balanced" export controls that "protect national security while enabling U.S. exports and global business operations."

"The U.S. needs to pursue an active international economic policy to advocate for American values and support AI innovation internationally," Google wrote in the document. "For too long, AI policymaking has paid disproportionate attention to the risks, often ignoring the costs that misguided regulation can have on innovation, national competitiveness, and scientific leadership, a dynamic that is beginning to shift under the new administration."

One of Google's more controversial recommendations relates to the use of IP-protected material.

Google argues that "fair use and text-and-data-mining exceptions" are "critical" to AI development and related scientific innovation. Like OpenAI, the company seeks the right for itself and its competitors to train on publicly available data, including copyrighted data, largely without restriction.

"These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rightsholders," Google wrote, "and avoid often highly unpredictable, imbalanced, and lengthy negotiations with data holders during model development or scientific experimentation."

Google, which has reportedly trained a number of models on public, copyrighted data, is fighting lawsuits with data owners who accuse the company of failing to notify and compensate them before doing so. U.S. courts have yet to decide whether the fair use doctrine effectively shields AI developers from IP claims.

In its AI policy proposal, Google also takes issue with certain export controls imposed under the Biden administration, which it says "may undermine economic competitiveness goals" by "imposing disproportionate burdens on U.S. cloud service providers." That contrasts with statements from Google competitors like Microsoft, which in January said it was "confident" it could "comply fully" with the rules.

Importantly, the export rules, which seek to limit the availability of advanced AI chips in disfavored countries, carve out exemptions for trusted businesses seeking large clusters of chips.

Elsewhere in its proposal, Google calls for "long-term, sustained" investments in foundational domestic research and development, pushing back against recent federal efforts to reduce spending and eliminate grant awards. The company said the government should release datasets that might be helpful for AI training, and allocate funding to "early-market R&D" while ensuring computing and models are "widely available" to scientists and institutions.

Pointing to the chaotic regulatory environment created by the U.S.' patchwork of state AI laws, Google urged the government to pass federal legislation on AI, including a comprehensive privacy and security framework. Just over two months into 2025, the number of pending AI bills in the U.S. has grown to 781, according to an online tracking tool.

Google cautions the U.S. government against imposing what it perceives to be onerous obligations around AI systems, such as liability for how the systems are used. In many cases, Google argues, a model's developer "has little to no visibility or control" over how the model is being used, and therefore shouldn't bear responsibility for misuse.

Historically, Google has opposed laws like California's defeated SB 1047, which clearly laid out what precautions an AI developer should take before releasing a model and the cases in which developers could be held liable for harms caused by the model.

"Even in cases where a developer provides a model directly to deployers, deployers will often be best placed to understand the risks of downstream uses, implement effective risk management, and conduct post-market monitoring and logging," Google wrote.

Google in its proposal also called disclosure requirements like those being contemplated by the EU "overly broad," and said the U.S. government should oppose transparency rules that require "divulging trade secrets, allow competitors to duplicate products, or compromise national security by providing adversaries a roadmap on how to circumvent protections or jailbreak models."

A growing number of countries and states have passed laws requiring AI developers to reveal more about how their systems work. California's AB 2013 mandates that companies developing AI systems publish a high-level summary of the datasets they used to train those systems. In the EU, to comply with the AI Act once it fully comes into force, companies will have to supply model deployers with detailed instructions on a model's operation, limitations, and associated risks.
