Read this before you trust any written symbol

Photo of author

By [email protected]


We are in the era of coding, allowing artificial intelligence models to create a symbol based on the developer’s router. Unfortunately, under the cap, feelings are bad. According to a recent report It was published by the Veracode Data Security Company, about half of all the code created from artificial intelligence containing safety defects.

Veracode cost more than 100 different linguistic styles with a completion of 80 separate coding tasks, from using different coding languages to building different types of applications. According to the report, every task had known the potential weaknesses, which means that models could complete each challenge in a safe or unsafe way. The results were not completely inspiring if security was the maximum priority for you, as only 55 % of the tasks were finally completed to create a “safe” code.

Now, one thing will be if these security gaps are small defects that can be easily corrected or mitigated. But they are often large holes. 45 % of the software instructions that failed to examine security produced a security vulnerability that was part of The 10 best application safety project opens all over the world Security gaps – Issues such as controlling broken access, encryption failure, and data integration failure. Basically, the output contains great problems enough that you do not want to rotate and push them directly, unless you are looking to penetrate.

Perhaps the most interesting result of the study, however, it is not simply that artificial intelligence models regularly produce an insecure symbol. It does not seem to be improving. Although Syntax has improved significantly over the past two years, with LLMS produced a symbol capable of almost all the time now, the aforementioned code safety has been fixed all the time. Even the latest and largest models fail to create a significantly safer symbol.

The fact that the foundation line for the safe output of the symbol created from artificial intelligence does not improve is a problem, because the use of artificial intelligence in programming is Get more popularAnd the surface area of the attack is increasing. Earlier this month, 404 media It was reported how an amazon artificial coding agent was able to delete computers that were used by injecting harmful software instructions with hidden instructions in the GitHub warehouse of the tool.

Meanwhile, when artificial intelligence agents become more common, as well Agents who are able to break the same code. recently research From the University of California, Berkeley found that artificial intelligence models have become very good in identifying exploitation errors in the code. So artificial intelligence models are constantly generating an insecure symbol, and other artificial intelligence models really increase in the discovery and exploitation of these weaknesses. This is fine.



https://gizmodo.com/app/uploads/2023/02/79c7c32e6f6f67dd13394c75687a623b.jpg

Source link

Leave a Comment