AI models can be surprisingly stealable, provided an attacker can somehow capture the model’s electromagnetic signature. While repeatedly emphasizing that they do not actually want to help people attack neural networks, researchers at North Carolina State University described such a technique in a new paper. All they needed was an electromagnetic probe, several pre-trained open-source AI models, and a Google Edge Tensor Processing Unit (TPU). Their method entails analyzing electromagnetic emissions while the TPU chip is actively operating.
“Building and training a neural network is very expensive,” said the study’s lead author, NC State Ph.D. student Ashley Kurian, in a call with Gizmodo. “It’s intellectual property that the company owns, and it takes a lot of time and computational resources. For example, ChatGPT — it’s made up of billions of parameters, and it’s kind of a secret. When someone steals it, ChatGPT becomes theirs. You know, they don’t have to pay for it, and they could also sell it.”
Theft is already a major concern in the world of AI. However, the concern usually runs in the opposite direction: AI developers train their models on copyrighted works without obtaining permission from the human creators. This pattern has stirred lawsuits and even tools that help artists fight back by “poisoning” art generators.
“The electromagnetic data from the sensor essentially gives us a ‘signature’ of the AI’s processing behavior,” Kurian explained in a statement, describing it as “the easy part.” But in order to decipher the model’s hyperparameters – its structure and specific details – they had to compare the electromagnetic field data with data captured while other AI models ran on the same type of chip.
By doing this, “we were able to identify the specific structure and properties – known as layer details – that we would need to make a copy of the AI model,” Kurian explained, adding that they could do so with “up to 99.91% accuracy.” To achieve this, the researchers had physical access to the chip, both to probe it and to run the other models. They also worked directly with Google to help the company determine how vulnerable its chips are to this kind of attack.
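The paper’s tooling isn’t published here, but the comparison step Kurian describes – lining up a captured trace against traces recorded from known models on the same type of chip – can be pictured with a minimal sketch. The snippet below is purely illustrative and is not the researchers’ actual method: it assumes electromagnetic traces are already available as NumPy arrays and uses normalized cross-correlation to pick the best-matching known layer configuration. The `guess_layer` helper and the library entries are hypothetical.

```python
import numpy as np


def normalize(trace: np.ndarray) -> np.ndarray:
    """Zero-mean, unit-variance scaling so traces from different runs are comparable."""
    return (trace - trace.mean()) / (trace.std() + 1e-12)


def similarity(captured: np.ndarray, reference: np.ndarray) -> float:
    """Peak of the normalized cross-correlation between two EM traces."""
    a, b = normalize(captured), normalize(reference)
    corr = np.correlate(a, b, mode="valid") / min(len(a), len(b))
    return float(corr.max())


def guess_layer(captured_segment: np.ndarray,
                reference_library: dict) -> tuple:
    """Return the known layer configuration whose reference trace
    (recorded on the same type of chip) best matches the captured segment."""
    scores = {name: similarity(captured_segment, ref)
              for name, ref in reference_library.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical reference traces for two layer configurations.
    library = {
        "conv2d 3x3, 64 channels": rng.normal(size=2000),
        "dense, 512 units": rng.normal(size=2000),
    }
    # Simulate a captured segment as a noisy copy of one reference trace.
    captured = library["dense, 512 units"] + 0.3 * rng.normal(size=2000)
    print(guess_layer(captured, library))
```

In practice, real traces would need alignment, segmentation per layer, and far more robust statistics than a single correlation peak; the sketch only conveys the general “match against a library of known signatures” idea.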
Kurian speculated that capturing models running on smartphones, for example, would also be possible, but their ultra-small design would make monitoring electromagnetic signals more difficult.
“Side-channel attacks on edge devices are nothing new,” Mehmet Sencan, a security researcher at the AI standards nonprofit Atlas Computing, told Gizmodo. But this particular technique for “extracting the hyperparameters of the entire model architecture is significant.” Because AI hardware “performs inference in plaintext, anyone deploying their models on the edge or in any server that is not physically secured has to assume that their architectures can be extracted through extensive probing,” Sencan explained.