Enterprise AI teams face an expensive dilemma: build advanced agent systems locked into a single large language model (LLM), or constantly rewrite prompts and data pipelines when switching between models. Financial technology giant Intuit has solved this problem with an approach that could reshape how organizations handle multi-model AI architecture.
Like many enterprises, Intuit has built AI solutions using multiple large language models (LLMs). Over the past few years, the company's generative AI operating system (GenOS) platform has steadily evolved, providing advanced capabilities for both company developers and end users, such as Intuit Assist. The company has increasingly focused on agentic AI work that has had a measurable impact on Intuit products, including QuickBooks, Credit Karma and TurboTax.
Intuit is now expanding GenOS with a series of updates aimed at improving agentic AI productivity and efficiency across the board. The improvements include the launch of an agent starter kit, which enabled 900 internal developers to build hundreds of AI agents within five weeks. The company has also introduced what it calls an "intelligent data cognition layer" that goes beyond traditional retrieval-augmented generation approaches.
Perhaps most impactful, Intuit has solved one of enterprise AI's thorniest problems: how to create agents that work smoothly across multiple large language models without forcing developers to rewrite prompts for each model.
"The main problem is that when you write a prompt for one model, model A, you tend to think about how to optimize it for model A, how that model was built and what it needs — and then at some point you need to switch to model B," Ashok Srivastava, chief data officer at Intuit, told VentureBeat. "The question is, do you have to rewrite it? In the past, one would have to rewrite it."
How genetic algorithms eliminate vendor lock-in and reduce AI operating costs
Organizations have found multiple ways to use different LLMs in production. One approach is model routing, a technique that uses a smaller LLM to determine where to send a given query.
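As a rough illustration of the routing idea, a lightweight classifier can decide which backend model should handle each query. This is a minimal sketch, not Intuit's implementation: the model names, the `classify_query` heuristic, and the keyword list are all hypothetical stand-ins for what would normally be a small routing LLM.

```python
def classify_query(query: str) -> str:
    """Stand-in for a small routing LLM: tag the query by complexity.

    A real router would call a lightweight model; here we use a
    keyword heuristic purely for illustration.
    """
    complex_markers = ("analyze", "compare", "forecast", "multi-step")
    if any(marker in query.lower() for marker in complex_markers):
        return "complex"
    return "simple"

# Illustrative route table: cheap model for routine queries,
# expensive model for hard ones.
ROUTES = {
    "simple": "small-fast-model",
    "complex": "large-capable-model",
}

def route(query: str) -> str:
    """Return the name of the model the query should be sent to."""
    return ROUTES[classify_query(query)]
```

The appeal of routing is cost control: most traffic never touches the expensive model.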
Intuit's prompt optimization service takes a different approach. It's not necessarily about finding the best model for a query, but rather about optimizing the prompt itself for any number of different LLMs. The system uses genetic algorithms to automatically create and test prompt variants.
"The way the prompt translation service works is that it actually has genetic algorithms as a component, and those genetic algorithms create variants of the prompt and then do internal optimization," Srivastava explained. "They start with a base set, create a variant and test that variant; if the variant is actually effective, the system says, I'm going to create that new rule, and then it continues to optimize."
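The generate-variants-and-test loop Srivastava describes can be sketched as a simple genetic search. Everything here is an illustrative assumption rather than Intuit's actual service: the mutation list, the placeholder `score` function (a real fitness function would run each variant against a target LLM and grade the responses), and the single-parent mutation strategy.

```python
import random

# Hypothetical mutation operators that produce prompt variants.
MUTATIONS = [
    lambda p: p + " Answer concisely.",
    lambda p: p + " Think step by step.",
    lambda p: "You are a financial assistant. " + p,
]

def score(prompt: str) -> float:
    """Placeholder fitness function: reward prompts containing traits we
    want. In practice, this would evaluate real model outputs."""
    text = prompt.lower()
    return float("step by step" in text) + float("concisely" in text)

def optimize(base_prompt: str, generations: int = 20, seed: int = 0) -> str:
    """Mutate the best-known prompt each generation; keep a variant
    only if it scores strictly higher than the current best."""
    rng = random.Random(seed)
    best, best_score = base_prompt, score(base_prompt)
    for _ in range(generations):
        variant = rng.choice(MUTATIONS)(best)
        s = score(variant)
        if s > best_score:
            best, best_score = variant, s
    return best
```

A production system would maintain a population of variants and evaluate fitness on held-out test queries, but the keep-the-winner loop is the core of the approach described above.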
This approach provides immediate operational benefits that go beyond convenience. For enterprises concerned about vendor lock-in or service reliability, the system provides automatic failover capabilities.
"If you're using a particular model, and for whatever reason that model is degraded, we can translate the prompt so that we can use a new model that may actually be up," Srivastava noted.
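The failover pattern itself is straightforward to sketch: try the preferred model and fall back down a priority list when a call fails. The names, the `call_model` stub, and the health-set mechanism below are all hypothetical; a real system would detect degradation from errors or latency, and would run a prompt-translation step before retrying on a different model.

```python
class ModelUnavailable(Exception):
    """Raised by the stub when the requested model is down."""

def call_model(name: str, prompt: str, healthy: set) -> str:
    """Stand-in for a real inference call; fails if the model is down."""
    if name not in healthy:
        raise ModelUnavailable(name)
    return f"[{name}] response to: {prompt}"

def generate_with_failover(prompt: str, models: list, healthy: set) -> str:
    """Try each model in priority order until one succeeds."""
    last_error = None
    for name in models:
        try:
            # A prompt-translation step would adapt the prompt
            # for each target model here.
            return call_model(name, prompt, healthy)
        except ModelUnavailable as err:
            last_error = err  # degraded model: try the next one
    raise RuntimeError(f"all models unavailable: {last_error}")
```

The prompt-translation service is what makes this failover practical: without it, the fallback model would receive a prompt tuned for a different model's quirks.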
Beyond RAG: intelligent cognition of enterprise data
While prompt optimization solves the model-portability challenge, Intuit's engineers identified another critical bottleneck: the time and expertise needed to integrate AI with complex enterprise data architecture.
Intuit has developed what it calls an "intelligent data cognition layer" to address more advanced data integration challenges. The approach goes beyond simple document retrieval and retrieval-augmented generation (RAG).
For example, if an enterprise receives a third-party dataset with a specific schema that the organization is largely unfamiliar with, the cognition layer can help. Srivastava noted that the layer understands the source schema as well as the target schema, and how to map between them.
This capability addresses real-world enterprise scenarios, where data comes from multiple sources with different structures. The system can automatically identify context that simple schema matching would miss.
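In its simplest form, the source-to-target mapping problem looks like the sketch below. The field names and the hard-coded `field_map` are illustrative assumptions; the point of a cognition layer is precisely that it would infer this mapping from context rather than requiring a developer to write it by hand.

```python
def map_record(record: dict, field_map: dict) -> dict:
    """Rename source-schema fields to target-schema fields,
    dropping any fields the mapping does not cover."""
    return {target: record[source]
            for source, target in field_map.items()
            if source in record}

# Hypothetical third-party record with an unfamiliar schema...
vendor_record = {"txn_amt": 42.5, "txn_dt": "2025-06-01", "memo": "coffee"}

# ...and the mapping a cognition layer might infer to the target schema.
field_map = {"txn_amt": "amount", "txn_dt": "date", "memo": "description"}
```

The hard part, and the claimed advance over plain RAG, is producing `field_map` automatically when field names like `txn_amt` carry no obvious relationship to the target schema.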
Combining predictive AI with generative AI
The intelligent data cognition layer enables advanced data integration, but Intuit's competitive edge extends beyond generative AI to how these capabilities are combined with proven predictive analytics.
The company runs what it calls a "supermodel": an ensemble system that combines multiple predictive models and deep learning approaches with advanced recommendation engines.
Srivastava explained that the supermodel is a supervisory model that looks across all the underlying recommendation systems. It takes into account how successful those recommendations have been in experiments and in the field and, based on all that data, takes an ensemble approach to producing the final recommendation. This hybrid approach provides predictive capabilities that pure LLM implementations cannot match.
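A supervisory ensemble of this kind can be sketched as a weighted vote over candidate recommendations. This is a minimal illustration under stated assumptions, not Intuit's supermodel: the system names, trust weights, and scores are invented, and the trust weights stand in for the measured historical success Srivastava describes.

```python
def supervise(candidates: dict, weights: dict) -> str:
    """Combine recommendations from several base systems.

    candidates: {system_name: {item: score}} -- each base system's
                scored recommendations.
    weights:    {system_name: trust} -- how successful that system
                has historically been (the supervisory signal).
    Returns the item with the highest trust-weighted total score.
    """
    totals = {}
    for system, items in candidates.items():
        trust = weights.get(system, 0.0)
        for item, score in items.items():
            totals[item] = totals.get(item, 0.0) + trust * score
    return max(totals, key=totals.get)

# Hypothetical base systems: a collaborative filter and a deep model.
example_candidates = {
    "collab_filter": {"offer_a": 0.9, "offer_b": 0.2},
    "deep_model": {"offer_b": 0.8},
}
example_weights = {"collab_filter": 0.5, "deep_model": 1.0}
```

Here the deep model's higher historical trust outweighs the collaborative filter's stronger raw score, which is the essence of letting field performance arbitrate between base systems.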
Mixing agentic AI with predictions can help enterprises look into the future and see what might happen with, for example, an impending cash-flow problem. The agent can then suggest changes that can be made now, with the user's permission, to help prevent the future problem.
Strategic implications for enterprise AI
Intuit's approach offers several strategic lessons for enterprises looking to lead in AI.
First, investing in LLM-agnostic architecture from the start can provide significant operational flexibility and risk mitigation. The genetic-algorithm approach to prompt optimization could be especially valuable for enterprises operating across multiple cloud providers, or for those concerned about model availability.
Second, the focus on combining traditional AI capabilities with generative AI suggests that enterprises should not abandon existing prediction and recommendation systems when building agentic architecture. Instead, they should look for ways to integrate these capabilities into more advanced reasoning systems.
This news also means the bar for advanced agentic applications keeps rising for enterprises that adopt AI later in the cycle. To remain competitive, organizations should think beyond simple chatbots or document retrieval systems, focusing instead on multi-agent architectures that can handle complex workflows and predictive analytics.
The takeaway for technical decision-makers is that successful enterprise AI applications require investment in sophisticated infrastructure, not just API calls to foundation models. Intuit's GenOS demonstrates that competitive advantage comes from an organization's ability to integrate AI capabilities with its existing data and business operations.