Singapore’s Vision for AI Safety Bridges the US-China Divide



The government of Singapore released a blueprint today for global collaboration on artificial intelligence safety, following a meeting of AI researchers from the United States, China, and Europe. The document lays out a shared vision for working on AI safety through international cooperation rather than competition.

“Singapore is one of the few countries on the planet that gets along well with both East and West,” says Max Tegmark, the MIT scientist who helped convene the AI meeting last month. “They know that they’re not going to build [artificial general intelligence] themselves, it will be done to them, so it is very much in their interests to have the countries that are going to build it talk to each other.”

The countries most widely thought likely to build AGI are, of course, the United States and China, and yet those nations seem more intent on outmaneuvering each other than on working together. In January, after the Chinese startup DeepSeek released an advanced model, President Trump called it “a wake-up call for our industries” and said the United States needed to be “laser-focused on competing to win.”

The Singapore Consensus on Global AI Safety Research Priorities calls for researchers to collaborate in three key areas: studying the risks posed by frontier AI models, exploring safer ways to build those models, and developing methods for controlling the behavior of the most advanced AI systems.

The consensus was developed at a meeting held on April 26 alongside the International Conference on Learning Representations (ICLR), a major AI event hosted in Singapore this year.

Researchers from OpenAI, Anthropic, Google DeepMind, xAI, and Meta all took part in the AI safety meeting, as did academics from institutions including MIT, Stanford, Tsinghua, and the Chinese Academy of Sciences. Experts from AI safety institutes in the United States, the United Kingdom, France, Canada, China, Japan, and South Korea also participated.

“In an era of geopolitical fragmentation, this comprehensive synthesis of cutting-edge research on AI safety is a promising sign that the global community is coming together with a shared commitment to shaping a safer AI future,” Xue Lan, dean of Tsinghua University, said in a statement.

The development of increasingly capable AI models, some of which have shown surprising abilities, has led researchers to worry about a range of risks. While some focus on near-term harms, including problems caused by biased AI systems or the potential for criminals to exploit the technology, a significant number believe that AI may pose an existential threat to humanity as it begins to outsmart humans across more domains. These researchers, sometimes referred to as “AI doomers,” worry that models may deceive and manipulate humans in pursuit of their own goals.

AI’s potential has also fueled talk of an arms race between the United States, China, and other powerful nations. The technology is viewed in policy circles as critical to economic prosperity and military dominance, and many governments have sought to stake out their own visions and regulations governing how it should be developed.
