The latest generation Artificial intelligence models are not stand -alone Chatbots that generate the text– In this, it can be easily connected to your data to give answers to your questions. Openai’s Chatgpt can be linked To your inbox in Gmail, it is allowed to inspect your Github icon, or find dates in your Microsoft calendar. But these links have the ability to attack – and the researchers have shown that it may take just a single “poisoned” document to do so.
New results were revealed by security researchers Michael Parkori and Tamir Ishi Sharabat, which was revealed at the Black Hacker Conference in Las Vegas today, how the weak conductors in Openai allowed to extract sensitive information from the Google Drive account using A Google Drive using using Indirect injection attack. In a demonstration of the attack, AgentflayerBargury explains how it was possible to extract the secrets of the developers, in the form of API keys, which were stored in the automatic display engine account.
Weakness highlights how artificial intelligence models are connected to external systems and sharing more data through them to increase the surface of the potential attack of malicious infiltrators and may double the ways in which security gaps can be provided.
“There is nothing that the user needs to do to decline, and there is nothing the user needs to do in order to get out of the data,” says Bargury, CTO at the security company Zenity, for Wire. “We have shown that this is completely clicking,” Bargori says.
Openai did not immediately respond to WIRED’s request to comment weak in the conductors. The company provided conductors for Chatgpt as a trial feature earlier this year, and Site lists At least 17 different services can be linked to their accounts. She says the system allows you to “bring your tools and data to ChatGPT” and “search files, withdraw direct data, and the reference content in the chat directly.”
BARGURY says he had reported the results of Openai earlier this year and that the company has quickly provided a reduction to prevent the technology that it used to extract data via conductors. The way the attack works only means extracting a limited amount of data simultaneously – the documents full of the attack cannot be removed.
“While this problem is not related to Google, it explains why the importance of developing strong protection against rapid injection attacks,” says Andy Wen, Director of Security Products Department at Google Workspace. Recently enhanced Amnesty International Security Trainers.
https://media.wired.com/photos/689389277e5b686b4b8f1a86/191:100/w_1280,c_limit/openai-google-drive-sec-2225304360.jpg
Source link