From drones that deliver medical supplies to digital assistants that perform everyday tasks, AI-powered systems are becoming increasingly integrated into daily life. The creators of these innovations promise transformative benefits. To some people, mainstream applications such as ChatGPT and Claude can seem like magic. But these systems are not magical, nor are they foolproof: They can and do regularly fail to work as intended.
AI systems can malfunction due to technical design flaws or biased training data. They can also have vulnerabilities in their code that malicious attackers can exploit. Isolating the cause of an AI failure is necessary to fix the system.
But AI systems are typically opaque, even to their creators. The challenge, then, is how to investigate AI systems after they fail or fall victim to attack. There are techniques for inspecting AI systems, but they require access to the system's internal data. This access is not guaranteed, especially to forensic investigators called in to determine the cause of a proprietary AI system's failure, making investigation impossible.
We are computer scientists who study digital forensics. Our team at the Georgia Institute of Technology has built a system, AI Psychiatry, or AIP, that can re-create the scenario in which an AI failed in order to determine what went wrong. The system addresses the challenges of AI forensics by recovering and "reanimating" a suspect AI model so that it can be tested systematically.
When AI fails
Imagine that a self-driving car veers off the road for no easily discernible reason and then crashes. Logs and sensor data might suggest that a faulty camera caused the AI to misinterpret a road sign as a command to swerve. After a mission-critical failure such as an autonomous vehicle crash, investigators need to determine exactly what caused the error.
Did the crash stem from a malicious attack on the AI? In this hypothetical case, the camera's fault could be the result of a security vulnerability or bug in its software that a hacker exploited. If investigators find such a vulnerability, they then have to determine whether it caused the crash. But making that determination is no easy feat.
Although there are forensic methods for recovering some evidence from failures of drones, autonomous vehicles and other so-called cyber-physical systems, none can capture the clues required for a full investigation of the AI inside such a system. Advanced AIs can even update their decision-making, and consequently the clues, continuously, which makes it impossible to investigate the newest models with existing methods.
Pathology for AI
AI Psychiatry applies a series of forensic algorithms to isolate the data behind an AI system's decisions. These pieces are then reassembled into a functional model that performs identically to the original. Investigators can "rehost" the AI in a controlled environment and test it with malicious inputs to see whether it exhibits harmful or hidden behaviors.
AI Psychiatry takes as input a memory image, a snapshot of the bits and bytes loaded when the AI was operational. The memory image at the time of the crash in the autonomous vehicle scenario holds crucial clues about the internal state and decision-making processes of the AI controlling the vehicle. With AI Psychiatry, investigators can now lift the exact AI model from memory, dissect its bits and bytes, and load the model into a secure environment for testing.
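The recover-and-rehost idea can be illustrated with a minimal sketch. Everything here is hypothetical: a pickled buffer stands in for a real memory image, and a tiny linear classifier stands in for a recovered model. This is not AI Psychiatry's actual code or API.

```python
import pickle

# Hypothetical stand-in for a deployed model: the weights of a tiny
# linear classifier. In a real investigation these bytes would be
# carved out of a memory image; here an in-memory buffer simulates one.
weights = {"W": [[0.9, -0.4], [0.2, 1.1]], "b": [0.1, -0.2]}
memory_image = pickle.dumps(weights)  # pretend process-memory snapshot

# Step 1: recover the serialized model from the "memory image".
recovered = pickle.loads(memory_image)

# Step 2: rehost it -- run the same decision function in isolation,
# away from the live system it came from.
def decide(x, params):
    scores = [
        sum(xi * wij for xi, wij in zip(x, col)) + b
        for col, b in zip(zip(*params["W"]), params["b"])
    ]
    return scores.index(max(scores))  # index of the winning class

# Step 3: probe the rehosted model with a controlled test input.
probe = [1.0, 0.0]
print(decide(probe, recovered))
```

The point of the sketch is the separation of steps: the recovered model is a byte-for-byte copy of what was in memory, so any behavior it shows under probing is the behavior of the original.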
Our team tested AI Psychiatry on 30 AI models, 24 of which were intentionally "backdoored" to produce incorrect results under specific triggers. The system was able to successfully recover every model, including models commonly used in real-world scenarios such as street sign recognition in autonomous vehicles.
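A backdoor check of the kind this testing implies can be sketched as follows. The model, the trigger, and the label names are all invented for illustration; they are not the actual backdoors or models from our tests.

```python
# Hypothetical backdoor probe: run the same input through a rehosted
# model with and without a suspected trigger pattern, and compare.
TRIGGER = [9.9, 9.9]  # suspected trigger values appended to an input

def model(x):
    # Stand-in "backdoored" classifier: behaves normally unless the
    # trigger values appear at the end of the input vector.
    if x[-2:] == TRIGGER:
        return "speed-limit-80"  # hidden malicious output
    return "stop-sign" if sum(x[:2]) > 1.0 else "yield-sign"

clean = [0.7, 0.6]
stamped = clean + TRIGGER

print(model(clean))    # benign behavior on the clean input
print(model(stamped))  # divergence here suggests a backdoor
```

Because the rehosted model runs in a controlled environment, investigators can stamp many candidate triggers onto known-good inputs and flag any that flip the model's decision.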
So far, our tests suggest that AI Psychiatry can effectively solve the digital mystery behind a failure such as an autonomous car crash that previously would have left more questions than answers. And if it does not find a vulnerability in the car's AI system, AI Psychiatry allows investigators to rule out the AI and look for other causes, such as a faulty camera.
Not just for autonomous vehicles
AI Psychiatry's main algorithm is generic: It focuses on the universal components that all AI models need in order to make decisions. This makes our approach readily extendable to any AI models built with popular AI development frameworks. Anyone working to investigate a possible AI failure can use our system to assess a model without prior knowledge of its exact architecture.
Whether the AI is a bot that makes product recommendations or a system that guides autonomous drones, AI Psychiatry can recover and rehost the AI for analysis. AI Psychiatry is fully open source for any investigator to use.
AI Psychiatry can also serve as a valuable tool for auditing AI systems before problems arise. With government agencies from law enforcement to child protective services integrating AI systems into their workflows, AI audits are becoming an increasingly common oversight requirement at the state level. With a tool like AI Psychiatry in hand, auditors can apply a consistent forensic methodology across diverse AI platforms and deployments.
In the long run, this will pay meaningful dividends both for the creators of AI systems and for everyone affected by the tasks they perform.
David Oygenblik, Ph.D. student in Electrical and Computer Engineering, Georgia Institute of Technology, and Brendan Saltaformaggio, Associate Professor of Cybersecurity and Privacy, and Electrical and Computer Engineering, Georgia Institute of Technology
This article is republished from The Conversation under a Creative Commons license. Read the original article.
