Irregular, independent and penetration: the dilemma of the artificial intelligence factor has not seen by anyone coming

Photo of author

By [email protected]


This article is part of the special number of Venturebeat, “PlayBook Cyber ​​Resistance: Mobility in the era of the new threat.” Read more of this A special case here.

AI Al -Tulaidi raises interesting security questions, and as institutions move to the agents, these safety issues increase.

When artificial intelligence agents enter the workflow, they must be able to access the data and sensitive documents to do their job – which makes them a great risk to many Institutions with security thinking.

Nicole Carnean, Vice President of Strategic Niper AI, said in AI at AT Darktrace. “But the effects and damages of these gaps can be greater due to the increase in the size of the contact points and the interfaces contained by multiple agents.”

Why do artificial intelligence agents form such high security risks

Artificial intelligence agents – Or independent artificial intelligence that carries out procedures on behalf of users – has become very common in the past few months only. Ideally, it can be connected to a boring function and can perform any task, from something simple such as finding information based on internal documents to providing recommendations to human employees.

But they are an interesting problem for institutional security professionals: they must access the data that make it effective, without opening or sending private information by mistake. With agents performing more tasks that employees are used to doing, the issue of accuracy and accountability plays their role in playing, and they may become a headache of safety and compliance teams.

Chris Pitz, CISO from AWSTell Venturebeat that the generation -centered generation (RAG) and customer use are a “great and interesting angle” in security.

“The organizations will need to think about the form of virtual participation in its institution, because the agent will find by searching for anything that supports his mission.” “If you encounter the documents, you should consider the virtual participation policy in your organization.”

Security specialists should ask whether the agents should be considered employees or digital programs. How much access should the agents have? How should you recognize it?

Weaknesses of artificial intelligence agent

Gen Ai made many institutions more aware Possible weaknessesBut agents can open them for more issues.

“The attacks that we see today affect the systems of one agent, such as poisoning with data, injections, or social engineering to influence the behavior of the worker, all weaknesses can be in a multi -agent system.”

Institutions should pay attention to what agents can access to ensure that data is strong.

Betz indicated that many Security issues The surrounding human employee’s arrival can extend to agents. Therefore, “is due to making sure that people can reach the right things and only the right things.” He added that when it comes to the agent of the work of multiple steps, “each of these stages is an opportunity for the infiltrators.

Give the agents an identity

One answer can be the issuance of specific access identities to the agents.

The world of models where problems throughout the days are “a world in which we need to think more about the registration of the agent’s identity as well as the identity of the human being responsible for the agent’s request everywhere in our organization,” said Jason Clinton, CISO of the model provider man.

The identification of human employees is something that institutions do for a very long time. They have specific functions; They have an email address they use to sign accounts and track them by IT officials; They have physical laptops with accounts that can be lock. They get individual permission to access some data.

Difficulty in this type of employee arrival and identification can be published in agents.

Betz and Clinton believe that this process can push institution leaders to rethink how information is to access users. Even organizations can lead to repairing the functioning of their work.

“The use of a work agent actually provides you with an opportunity to link use cases for each step along the way to the data he needs as part of RAG, but only the data he needs,” Pitts said.

He added that Agentic workflows “can help address some of these concerns about excessive participation”, because companies must consider the data that are accessed to complete the procedures. “There is no reason to prevent the first step to reach the same data that seven needs need.”

Old check is not enough

Institutions can also search for an agent platform that allows them to take a peek on how agents work. For example, Don Schuerman, Cto of Skyflow Automation Provider BigaHe said his company is helping to guarantee the security agent by informing the user what the agent does.

“Our primary system is already used to scrutinize the work that humans are doing, so we can also review every step by an agent,” Chorman told Venturebeat.

The latest PEGA products, AgentxHuman users are allowed to switch to a screen that defines the steps taken by the agent. Users can know where the agent is along the timetable for the workflow and get a reading of its specified actions.

Checking, schedules, and identification of identity are not ideal solutions to security issues provided by artificial intelligence agents. But with institutions exploring the possibilities of agents and begins to publish them, more targeted answers can appear as AI’s experience continues.



https://venturebeat.com/wp-content/uploads/2025/02/upscalemedia-transformed-1.jpeg?w=1024?w=1200&strip=all

Source link

Leave a Comment