Google has made one of the most substantive changes to its AI Principles since first publishing them in 2018. As spotted by The Washington Post, the search giant edited the document to remove pledges that it would not "design or deploy" AI tools for use in weapons or surveillance technology. Previously, those guidelines included a section titled "Applications we will not pursue," which is absent from the current version of the document.
Instead, there is now a section titled "Responsible development and deployment." There, Google says it will implement "appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights."
That's a far broader commitment than the specific ones the company made as recently as the end of last month, when the previous version of its AI Principles was still live on its website. For instance, as it relates to weapons, the company previously said it would not design AI for use in "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people." As for AI surveillance tools, the company said it would not develop technology that violates "internationally accepted norms."
When asked for comment, a Google spokesperson pointed Engadget to a blog post the company published on Thursday. In it, DeepMind CEO Demis Hassabis and James Manyika, senior vice president of research, labs, technology and society at Google, say the emergence of AI as a "general-purpose technology" necessitated the policy change.
"We believe democracies should lead in AI development, guided by core values like freedom, equality and respect for human rights. And we believe that companies, governments and organizations sharing these values should work together to create AI that protects people, promotes global growth and supports national security," the two wrote. "… And guided by our AI Principles, we will continue to focus on AI research and applications that align with our mission, our scientific focus and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights, always evaluating specific work by carefully assessing whether the benefits substantially outweigh potential risks."
When Google first published its AI Principles in 2018, it did so in the aftermath of Project Maven. That was a controversial government contract which, had Google decided to renew it, would have seen the company provide AI software to the Department of Defense for analyzing drone footage. Dozens of Google employees quit the company in protest of the contract, with thousands more signing a petition in opposition. When Google eventually published its new guidelines, CEO Sundar Pichai reportedly told employees he hoped they would stand "the test of time."
By 2021, however, Google began pursuing military contracts again, with what was reportedly an "aggressive" bid for the Pentagon's Joint Warfighting Cloud Capability contract. At the start of this year, The Washington Post reported that Google employees had repeatedly worked with Israel's Defense Ministry to expand the government's use of AI tools.
This article originally appeared on Engadget at https://www.engadget.com/ai/google-now-thinks-its-ok-to-ose-ai-for-wears-and-surveillance-24824373.html?RSS