Just three days after the Trump administration published its AI Action Plan, the Chinese government released its own AI policy blueprint. Was the timing a coincidence? I doubt it.
China's Global AI Governance Action Plan was released on July 26, the first day of the World Artificial Intelligence Conference (WAIC), the country's largest annual AI event. Geoffrey Hinton and Eric Schmidt were among the many Western tech industry figures who attended the festivities in Shanghai. Our colleague Will Knight was also on the scene.
The vibe at WAIC was the polar opposite of Trump's America-first, regulation-light vision for AI, Will tells me. In his opening speech, Chinese premier Li Qiang made a sober case for the importance of global cooperation on AI. He was followed by a series of prominent Chinese AI researchers, who gave technical talks highlighting urgent questions that the Trump administration appears to be largely brushing aside.
Zhou Bowen, leader of the Shanghai AI Laboratory, one of China's top research institutions, touted his team's work on AI safety at WAIC. He also suggested that the government could play a role in monitoring AI models for vulnerabilities.
In an interview with WIRED, Yi Zeng, a professor at the Chinese Academy of Sciences and one of the country's leading voices on AI, said he hopes that AI safety organizations from around the world will find ways to collaborate. "It would be best if the United Kingdom, the United States, China, Singapore, and other institutes came together," he said.
The conference also featured closed-door meetings on AI policy issues. Paul Triolo, a partner at the consulting firm DGA-Albright Stonebridge Group, said the discussions were productive, despite the notable absence of American leadership. With the United States out of the picture, he said, a coalition of AI safety players co-led by China, Singapore, the UK, and the EU will now drive the efforts to build guardrails around frontier AI model development. He added that it wasn't just the US government that was missing: of all the major American AI labs, only Elon Musk's xAI sent employees to attend the WAIC forum.
Many Western visitors were surprised by how much of the AI conversation in China revolves around safety regulation. "You could literally attend AI safety events nonstop over the past seven days. That was not the case at some of the other global AI summits," Brian Tse, founder of the Beijing-based AI safety research institute Concordia AI, told me. Earlier this week, Concordia AI hosted a day-long safety forum in Shanghai with prominent researchers such as Stuart Russell and Yoshua Bengio.
Switching places
Comparing China's AI blueprint with the Trump plan, the two countries appear to have switched positions. When Chinese companies first began developing advanced AI models, many observers expected them to be hampered by the government's censorship requirements. Now, US leaders say they want to ensure that homegrown AI models pursue "objective truth," an endeavor that, as my colleague Steven Levy wrote in last week's Backchannel newsletter, is "a blatant exercise in top-down ideological bias." China's AI plan, meanwhile, reads like a globalist statement: it recommends that the United Nations help lead international AI efforts and suggests that governments have an important role to play in regulating the technology.
Although their governments are fundamentally different, when it comes to AI safety, people in China and the United States worry about many of the same things: model hallucinations, discrimination, existential risks, cybersecurity vulnerabilities, and so on. The two countries are developing frontier AI models built on similar architectures and scaled according to similar scaling laws, which means academic research on AI safety is converging across them, including in areas such as scalable oversight (how humans can monitor AI models with other AI models) and interoperable safety testing standards.