Computer code generated by artificial intelligence is rife with references to non-existent third-party libraries, creating a golden opportunity for supply-chain attacks that poison legitimate programs with malicious packages that can steal data, plant backdoors, and carry out other nefarious actions, newly published research shows.
The study, which used 16 widely used large language models to generate 576,000 code samples, found that roughly 440,000 of the package dependencies those samples contained were "hallucinated," meaning they don't exist. Open source models hallucinated the most, with 21 percent of their dependencies linking to non-existent libraries. A dependency is an essential code component that a separate piece of software requires in order to work properly. Dependencies save developers the trouble of rewriting code and are a fundamental part of the modern software supply chain.
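To make the term concrete, here is a minimal, hypothetical example (the package name and file contents are illustrative, not drawn from the study) of how a Python project declares and then uses a third-party dependency:

# requirements.txt -- the dependency declaration an installer such as pip reads
# requests==2.31.0

# app.py -- this code only works if the declared dependency is installed
import requests  # third-party library fetched from the PyPI registry

def fetch_status(url: str) -> int:
    # Uses the 'requests' dependency instead of hand-written HTTP code
    return requests.get(url, timeout=10).status_code

if __name__ == "__main__":
    print(fetch_status("https://example.com"))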
Package hallucinations bring back memories of the past
These non-existent dependencies represent a threat to the software supply chain by exacerbating so-called dependency confusion attacks. These attacks work by causing a software package to pull in the wrong component dependency, for instance by publishing a malicious package and giving it the same name as a legitimate one but with a later version number. Software that depends on the package will, in some cases, choose the malicious version rather than the legitimate one because the former appears to be more recent.
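The version-selection logic at the heart of that trick can be sketched in a few lines; the package name "acme-utils" and both version numbers below are hypothetical, and real installers apply more nuanced rules, but the core problem is that a higher version number on a public registry can outrank a legitimate internal release:

# Hypothetical scenario: 'acme-utils' 1.2.0 exists only on a private index,
# while an attacker uploads 'acme-utils' 99.0.0 to the public registry.
from packaging.version import Version  # the 'packaging' library used by pip itself

internal_release = Version("1.2.0")   # legitimate internal package
attacker_release = Version("99.0.0")  # malicious look-alike on the public registry

# A resolver that simply prefers the newest version cannot tell publishers apart,
# so the attacker's package wins the comparison.
chosen = max(internal_release, attacker_release)
print(f"Resolver would install version {chosen}")  # -> 99.0.0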
Also known as package confusion, this form of attack was first demonstrated in 2021 in a proof-of-concept exploit that executed counterfeit code on networks belonging to some of the biggest companies on the planet, Apple, Microsoft, and Tesla among them. It is one type of technique used in software supply-chain attacks, which aim to poison software at its very source in an attempt to infect all users downstream.
"Once the attacker publishes a package under the hallucinated name, containing some malicious code, they rely on the model suggesting that name to unsuspecting users," Joseph Spracklen, a University of Texas at San Antonio Ph.D. student and lead researcher, told Ars via email. "If a user trusts the LLM's output and installs the package without carefully verifying it, the attacker's payload, hidden in the malicious package, would be executed on the user's system."
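One possible sanity check, not prescribed by the researchers but offered here as a minimal sketch built on PyPI's public JSON API, is to confirm that an LLM-suggested package name is actually registered before installing it:

import requests  # assumes the 'requests' library is available

def exists_on_pypi(package_name: str) -> bool:
    # The PyPI JSON API returns HTTP 200 for registered names, 404 otherwise.
    resp = requests.get(f"https://pypi.org/pypi/{package_name}/json", timeout=10)
    return resp.status_code == 200

# 'some-llm-suggested-package' is a hypothetical name used only for illustration.
suggested = "some-llm-suggested-package"
if not exists_on_pypi(suggested):
    print(f"'{suggested}' is not on PyPI -- possibly a hallucinated dependency.")
else:
    print(f"'{suggested}' exists, though existence alone does not prove it is trustworthy.")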
In AI, hallucinations occur when an LLM produces outputs that are factually incorrect, nonsensical, or unrelated to the task it was given. Hallucinations have long dogged LLMs because they degrade their usefulness and trustworthiness and have proven vexingly difficult to predict and remedy. In a paper scheduled to be presented at the 2025 USENIX Security Symposium, the researchers dubbed the phenomenon "package hallucination."
For the study, the researchers ran 30 tests, 16 in the Python programming language and 14 in JavaScript, which generated 19,200 code samples per test, for a total of 576,000 code samples. Of the 2.23 million package references contained in those samples, 440,445, or 19.7 percent, pointed to packages that don't exist. Among those 440,445 package hallucinations, 205,474 had unique package names.
One of the things that makes package hallucinations potentially useful in supply-chain attacks is that 43 percent of them were repeated over 10 queries. "In addition," the researchers wrote, "58 percent of the time, a hallucinated package is repeated more than once in 10 iterations, which shows that the majority of hallucinations are not simply random errors, but a repeatable phenomenon that persists across multiple iterations."
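That repeatability is what would let an attacker harvest candidate names to register. A rough sketch of the idea, using hypothetical query results that have already been collected as lists of suggested names, might count which nonexistent names recur across repeated queries:

from collections import Counter

def find_repeated_hallucinations(suggestions_per_query, known_packages, min_repeats=2):
    # suggestions_per_query: one list of suggested package names per LLM query.
    # known_packages: set of names that actually exist on the registry.
    counts = Counter(
        name
        for suggestions in suggestions_per_query
        for name in set(suggestions)      # count each name at most once per query
        if name not in known_packages     # keep only nonexistent (hallucinated) names
    )
    return {name: n for name, n in counts.items() if n >= min_repeats}

# Hypothetical data for illustration: the same invented name recurs across queries.
queries = [["flask", "fastjson-utils"], ["fastjson-utils", "requests"], ["fastjson-utils"]]
existing = {"flask", "requests"}
print(find_repeated_hallucinations(queries, existing))  # {'fastjson-utils': 3}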