Who’s to Blame When AI Agents Make Mistakes?



Over the past year, veteran software engineer Jay Prakash Thakur has spent his nights and weekends prototyping AI agents that could, in the near future, order meals and engineer mobile apps almost entirely on their own. Despite their impressive abilities, his agents have also surfaced new legal questions that await companies trying to capitalize on Silicon Valley’s hottest new technology.

AI agents are programs capable of acting largely independently, allowing companies to automate tasks such as answering customer questions or paying invoices. While ChatGPT and similar chatbots can draft emails or analyze bills on demand, Microsoft and other tech giants expect agents to take on more complicated jobs, and, most importantly, to do so with little human oversight.

The tech industry’s most ambitious plans involve systems of multiple agents, with dozens of them one day cooperating like an entire workforce. For companies, the appeal is clear: savings on time and labor costs. And demand for the technology is already rising. Technology market researcher Gartner estimates that agentic AI will resolve 80 percent of common customer service inquiries by 2029. Fiverr, a service where companies can book freelance programmers, reports that searches for “AI agent” have surged 18,347 percent in recent months.

Thakur, a mostly self-taught programmer based in California, wanted to be at the forefront of the emerging field. His day job at Microsoft isn’t related to agents, but he has been tinkering with AutoGen, Microsoft’s open-source framework for building agents, since he worked at Amazon back in 2024. Thakur says he has developed multi-agent prototypes using AutoGen with just a dash of programming. Last week, Amazon launched a similar agent-development tool called Strands; Google offers what it calls an Agent Development Kit.
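For a sense of what “a dash of programming” looks like, below is a minimal sketch of a single-assistant setup using AutoGen’s public Python API (pyautogen). The model name, API key, and prompt are placeholders invented for illustration; this is not Thakur’s code.

```python
# A minimal AutoGen (pyautogen) sketch: one LLM-backed assistant plus a
# proxy that relays messages with no human in the loop. The model name
# and API key are placeholders, not details from the article.
import autogen

llm_config = {"config_list": [{"model": "gpt-4o", "api_key": "YOUR_API_KEY"}]}

# An LLM-backed agent that plans and writes code on request.
assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)

# A stand-in for the human user; it auto-replies rather than asking for input.
user_proxy = autogen.UserProxyAgent(
    name="user",
    human_input_mode="NEVER",     # no human oversight, as with agents
    code_execution_config=False,  # don't execute generated code locally
    max_consecutive_auto_reply=1,
)

# Kick off a short agent exchange.
user_proxy.initiate_chat(assistant, message="Outline a food-ordering mobile app.")
```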

Since agents are meant to act autonomously, the question of who bears responsibility when their mistakes cause financial damage looms large. Thakur believes that assigning blame when agents from different companies miscommunicate within a single system could become contentious. He compared the challenge of reviewing error logs from various agents to reconstructing a conversation from different people’s notes. “It is often impossible to pinpoint responsibility,” Thakur said.
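One way to make that reconstruction tractable, at least in principle, is to tag every agent message with its origin the moment it is produced. The scheme below is an assumption for illustration, not something Thakur describes; the field names and vendor labels are invented.

```python
# A hedged sketch of an attribution log for multi-agent systems: each
# message is recorded with which agent (and whose company) produced it,
# so a post-mortem can reconstruct the conversation. Field names and
# vendor labels are invented for illustration.
import json
import time

def log_agent_message(trail: list, agent: str, vendor: str, content: str) -> None:
    """Append one attributable record to the shared audit trail."""
    trail.append({
        "ts": time.time(),   # when the message was produced
        "agent": agent,      # which agent said it
        "vendor": vendor,    # which company built that agent
        "content": content,  # the message itself
    })

trail: list = []
log_agent_message(trail, "tool_search", "vendor-a", "Found an image-resizing API.")
log_agent_message(trail, "policy_summarizer", "vendor-b", "Supports unlimited requests.")
print(json.dumps(trail, indent=2))  # an ordered, attributable record
```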

Joseph Fireman, senior legal counsel at OpenAI, said on stage at a recent legal conference hosted by the Media Law Resource Center in San Francisco that aggrieved parties tend to go after those with the deepest pockets. That means companies like his will need to be prepared to take some responsibility when agents cause harm, even when the party at fault may be a kid messing around with an agent. (If that person is to blame, they likely aren’t worth suing, the thinking goes.) “I don’t think anybody is hoping to get to the consumer sitting in their mom’s basement on the computer,” Fireman said. The insurance industry has begun offering coverage for AI chatbot issues to help companies cover the costs of mishaps.

Onion rings

Thakur’s experiments have involved stringing together agents into systems that require as little human intervention as possible. One project he pursued would replace his fellow software developers with two agents: one trained to search for the specialized tools needed to make apps, the other to summarize those tools’ usage policies. In the future, a third agent could use the identified tools and follow the summarized policies to develop an entirely new app, Thakur says. A sketch of such a pipeline appears below.
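The following is a minimal sketch of how such a two-agent pipeline might be wired up in AutoGen. The system prompts, model config, and example query are all invented; nothing here is Thakur’s actual code.

```python
# A sketch (prompts, model, and query are assumptions) of the two-agent
# pipeline described above: one agent finds tools, another summarizes
# their usage policies for a downstream coding agent.
import autogen

llm_config = {"config_list": [{"model": "gpt-4o", "api_key": "YOUR_API_KEY"}]}

# Agent 1: searches for specialized tools needed to build an app.
tool_search = autogen.AssistantAgent(
    name="tool_search",
    system_message="Find tools for the requested app and quote their "
                   "documented usage policies verbatim.",
    llm_config=llm_config,
)

# Agent 2: condenses each policy; qualifiers such as rate caps and user
# tiers are exactly what must survive summarization.
policy_summarizer = autogen.AssistantAgent(
    name="policy_summarizer",
    system_message="Summarize usage policies, preserving every limit and "
                   "qualifier (rates, time windows, user tiers) exactly.",
    llm_config=llm_config,
)

# A proxy that relays messages between steps with no human in the loop.
orchestrator = autogen.UserProxyAgent(
    name="orchestrator",
    human_input_mode="NEVER",
    code_execution_config=False,
)

# Step 1: find a tool. Step 2: summarize its policy for the coding agent.
found = orchestrator.initiate_chat(
    tool_search,
    message="Find an image-resizing API for a mobile app.",
    max_turns=1,
)
orchestrator.initiate_chat(
    policy_summarizer,
    message=f"Summarize these usage policies:\n{found.summary}",
    max_turns=1,
)
```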

When Thakur put his prototype to the test, the search agent found a tool that, according to its website, “supports unlimited requests per minute for enterprise users” (meaning that high-paying customers can lean on it as much as they want). But in trying to distill the key information, the summarization agent dropped the crucial qualifier “per minute for enterprise users.” It erroneously told the coding agent, which did not qualify as an enterprise user, that it could write a program making unlimited requests to the outside service. Because this was a test, there was no harm done. Had it happened in real life, the truncated guidance could have caused the entire system to break down unexpectedly.
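To make the failure concrete, here is a toy illustration, entirely invented rather than taken from Thakur’s prototype, of how dropping that one qualifier changes what a downstream agent believes it is allowed to do.

```python
# A toy illustration (invented, not Thakur's code) of how one dropped
# qualifier turns a tiered rate limit into no limit at all.
FULL_POLICY = "supports unlimited requests per minute for enterprise users"
BAD_SUMMARY = "supports unlimited requests"  # qualifier lost in summarization

def allowed_rate(policy: str, is_enterprise: bool) -> str:
    """What a naive coding agent would infer from the policy text."""
    if "per minute for enterprise users" in policy:
        # The qualifier survives: non-enterprise users keep a cap.
        return "unlimited" if is_enterprise else "capped"
    if "unlimited requests" in policy:
        return "unlimited"  # no qualifier, so no guardrail at all
    return "capped"

print(allowed_rate(FULL_POLICY, is_enterprise=False))  # capped (correct)
print(allowed_rate(BAD_SUMMARY, is_enterprise=False))  # unlimited (wrong)
```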


