Nvidia is launching three new NIM microservices, small independent services that form part of larger applications, to help enterprises add control and safety measures to their AI agents.
One of the new NIM services targets content integrity, working to prevent an AI agent from generating harmful or biased output. Another keeps conversations restricted to approved topics, while a third helps block jailbreak attempts that would strip away an AI agent's software restrictions.
These three new NIM services are part of Nvidia NeMo Guardrails, Nvidia's existing open-source suite of software tools and microservices aimed at helping companies improve their AI applications.
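For context, NeMo Guardrails is driven by a YAML configuration that declares which rails run on a conversation. The sketch below is purely illustrative; the engine and model names are assumptions and not taken from Nvidia's announcement:

```yaml
# Illustrative NeMo Guardrails config sketch (engine/model are assumed, not from the announcement)
models:
  - type: main
    engine: openai        # any supported LLM engine
    model: gpt-4o-mini    # hypothetical model choice

rails:
  input:
    flows:
      - self check input   # screens user input before it reaches the agent
  output:
    flows:
      - self check output  # screens the agent's response before it is returned
```

In this setup, each rail is a separate, lightweight check layered around the main model, which is the pattern the new microservices extend.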
"By applying multiple lightweight, specialized models as guardrails, developers can cover gaps that may occur when only more general global policies and protections exist, since a one-size-fits-all approach doesn't properly secure and control complex agentic AI workflows," the press release said.
It seems that AI companies may be starting to realize that convincing businesses to adopt their AI agent technology will not be as simple as they initially thought. While the likes of Salesforce CEO Marc Benioff recently predicted there will be more than a billion agents running on Salesforce alone within the next 12 months, the reality will likely look a little different.
A recent study from Deloitte predicts that about 25% of companies are either already using AI agents or plan to do so in 2025. The report also predicted that by 2027, about half of companies will be using agents. This suggests that while companies are clearly interested in AI agents, adoption is not keeping pace with the rate of AI innovation.
Nvidia is likely hoping that such initiatives will make adopting AI agents seem safer and less experimental. Time will tell if this is actually true.