UK drops “safety” from its AI body, now called the AI Security Institute, and signs a memorandum of understanding with Anthropic



The UK government wants to make a hard pivot into boosting its economy and industry with AI, and as part of that, it is repurposing an institution it founded a little over a year ago for a very different goal. The Department for Science, Innovation and Technology announced today that it will be renaming the AI Safety Institute the “AI Security Institute.” With that, the body will shift from exploring areas like existential risk and bias in large language models to a focus on cybersecurity, specifically “strengthening protections against the risks AI poses to national security and crime.”

Alongside this, the government also announced a new partnership with Anthropic. No firm services were announced, but the memorandum of understanding indicates the two will “explore” using Anthropic’s AI assistant Claude in public services, and that Anthropic will aim to contribute to work in scientific research and economic modeling. At the AI Security Institute, it will provide tools to evaluate AI capabilities in the context of identifying security risks.

“AI has the potential to transform how governments serve their citizens,” Anthropic co-founder and CEO Dario Amodei said in a statement. “We look forward to exploring how Anthropic’s AI assistant Claude could help UK government agencies enhance public services, with the goal of discovering new ways to make vital information and services more efficient and accessible to people in the UK.”

Anthropic is the only company being announced today, coinciding with a busy week of AI activity in Munich and Paris, but it is not the only one working with the government. A series of new tools unveiled in January were powered by OpenAI. (At the time, Peter Kyle, the secretary of state for technology, said the government planned to work with a range of foundational AI companies, and that is what the Anthropic deal is proving.)

The government’s pivot of the AI Safety Institute, launched a little over a year ago with considerable fanfare, toward AI security should not come as a big surprise.

When the newly installed Labour government announced its AI-heavy Plan for Change in January, it was notable that the words “safety,” “harm,” “existential,” and “threat” did not appear at all in the document.

That was not an oversight. The government’s plan is to kick-start investment in a more modernized economy, using technology and specifically AI to do so. It wants to work more closely with Big Tech, and it also wants to build its own homegrown tech giants. The main messages it has promoted have been development, AI, and more development. Civil servants will get their own AI assistant, called “Humphrey,” and are being encouraged to share data and use AI in other areas to speed up how they work. Consumers will get digital wallets for their government documents, and chatbots.

So, have AI safety issues been resolved? Not exactly, but the message seems to be that they cannot be considered at the expense of progress.

The government maintained that despite the name change, the song will remain the same.

“The changes I’m announcing today represent the logical next step in how we approach responsible AI development – helping us to unleash AI and grow the economy as part of our Plan for Change,” Kyle said in a statement. “The work of the AI Security Institute won’t change, but this renewed focus will ensure our citizens – and those of our allies – are protected from those who would look to use AI against our institutions, democratic values, and way of life.”

“The institute’s focus from the start has been on security, and we have built a team of scientists focused on evaluating serious risks to the public,” added Ian Hogarth, who remains chair of the institute. “Our new criminal misuse team and deepening partnership with the national security community mark the next stage of tackling those risks.”

Further afield, priorities also appear to have shifted around the importance of “AI safety.” The biggest risk the AI Safety Institute in the U.S. is contemplating right now is that it will be dismantled, something U.S. Vice President JD Vance telegraphed earlier this week during his speech in Paris.



