Inside the Biden Administration's Unpublished Report on AI Safety



At a computer security conference in Arlington, Virginia, last October, a few dozen AI researchers took part in a first-of-its-kind exercise in red teaming, or stress-testing a cutting-edge language model and other artificial intelligence systems. Over two days, the teams identified 139 novel ways to get the systems to misbehave, including by generating misinformation or leaking personal data. Most importantly, they exposed shortcomings in a new US government standard designed to help companies test their AI systems.

The National Institute of Standards and Technology (NIST) never published a report detailing the exercise, which was completed toward the end of the Biden administration. The document might have helped companies assess their own AI systems, but sources familiar with the situation, who spoke on condition of anonymity, say it was one of several NIST AI documents that were withheld for fear of clashing with the incoming administration.

"It became very difficult, even under [President Joe] Biden, to get any papers out," one of the sources says. "It felt like climate change research or cigarette research."

Neither NIST nor the Commerce Department responded to a request for comment.

Before taking office, President Donald Trump signaled that he planned to reverse Biden's executive order on artificial intelligence. His administration has since steered experts away from studying issues such as algorithmic bias and fairness in AI systems. The AI Action Plan released in July explicitly calls for a revision of NIST's AI Risk Management Framework "to eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change."

Ironically, though, Trump's AI Action Plan also calls for exactly the kind of exercise the unpublished report covered. It directs numerous agencies, along with NIST, to "coordinate an AI hackathon initiative to solicit the best and brightest from US academia to test AI systems for transparency, effectiveness, use control, and security vulnerabilities."

The red-teaming event was organized through NIST's Assessing Risks and Impacts of AI (ARIA) program in collaboration with Humane Intelligence, a company that specializes in testing AI systems. The exercise, in which teams attacked the tools under scrutiny, took place at the Conference on Applied Machine Learning in Information Security (CAMLIS).

The CAMLIS red-teaming report describes efforts to probe several cutting-edge AI systems, including Llama, Meta's open source large language model; Anote, a platform for building and fine-tuning AI models; a system that blocks attacks on AI systems from Robust Intelligence, a company acquired by Cisco; and a platform for generating AI avatars from the firm Synthesia. Representatives from each of the companies also took part in the exercise.

Participants were asked to use the NIST AI 600-1 framework to assess the AI tools. The framework covers risk categories including generating misinformation or enabling cybersecurity attacks, leaking private user information or critical details about related AI systems, and the potential for users to become emotionally attached to AI tools.

The researchers discovered various tricks for getting the models and tools being tested to jump their guardrails, generate misinformation, leak personal data, and help craft cybersecurity attacks. The report says participants found some elements of the NIST framework more useful than others, and that some of NIST's risk categories were too vaguely defined to be useful in practice.


