As enterprises grapple with deploying AI agents in mission-critical applications, a new, more pragmatic model is emerging that restores human control as a strategic safeguard against AI failure.
One example is Mixus, a platform that uses a "colleague-in-the-loop" approach to make AI agents reliable enough for mission-critical work.
This approach is a response to mounting evidence that fully autonomous agents are a high-risk gamble.
The high cost of unverified AI
The problem of AI hallucinations has become a tangible risk as companies explore agentic applications. In one recent incident, the support bot for the AI-powered code editor Cursor invented a fake policy restricting subscriptions, triggering a wave of public customer cancellations.
Similarly, fintech company Klarna famously reversed course on replacing its customer service agents with AI after admitting the move had degraded quality. In a more alarming case, New York City's business chatbot advised entrepreneurs to engage in illegal practices, highlighting the catastrophic compliance risks of unmonitored agents.
These incidents are symptoms of a larger capability gap. According to a May 2025 Salesforce research paper, today's leading agents succeed only 58% of the time on single-step tasks and just 35% of the time on multi-step ones, highlighting "a significant gap between current LLM capabilities and the multifaceted demands of real-world enterprise scenarios."
A colleague-in-the-loop model
To close this gap, a new approach focuses on structured human oversight. "An AI agent should act at your direction and on your behalf," Elliot Katz, co-founder of Mixus, told VentureBeat. "But without built-in organizational oversight, fully autonomous agents often create more problems than they solve."
This philosophy underpins Mixus's colleague-in-the-loop model, which embeds human verification directly into the workflow. For example, a large retailer may receive weekly reports from thousands of stores containing critical operational data (for example, sales volumes, labor hours, productivity ratios, compensation requests from headquarters). Human analysts must spend hours manually reviewing the data and making judgment calls. With Mixus, the AI agent automates the heavy lifting, analyzing complex patterns and flagging anomalies such as unusually high payroll requests or outlier productivity figures.

For high-stakes decisions such as payment authorizations or policy violations (workflows a human user has designated as "high-risk"), the agent pauses and requires human approval before proceeding. This division of labor between AI and humans is defined during the agent-creation process.
"This approach means humans are involved only when their expertise actually adds value, typically the 5-10% of decisions that could have a major impact, while the other 90-95% of routine tasks flow through automatically," Katz said. "You get the full speed of automation for standard operations, but human oversight kicks in precisely when context, judgment and accountability matter most."
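The escalation pattern Katz describes can be sketched in a few lines. This is an illustrative sketch, not Mixus's actual API: the `Decision` type, the `risk` score and the threshold value are all hypothetical stand-ins for however a real platform represents a user-defined "high-risk" workflow.

```python
# Illustrative sketch (not Mixus's actual API): route agent decisions
# through a human-approval gate based on a user-defined risk threshold.
from dataclasses import dataclass

@dataclass
class Decision:
    description: str
    risk: float  # 0.0 (routine) .. 1.0 (high impact); hypothetical score

HIGH_RISK_THRESHOLD = 0.9  # hypothetical cutoff set by the workflow owner

def handle(decision: Decision, request_human_approval) -> str:
    """Auto-execute routine work; pause high-risk items for a person."""
    if decision.risk >= HIGH_RISK_THRESHOLD:
        # The ~5-10% of decisions that matter most wait for a human.
        approved = request_human_approval(decision)
        return "executed" if approved else "rejected"
    # The ~90-95% of routine tasks flow straight through.
    return "executed"

# A routine report proceeds automatically; a large payout is escalated.
assert handle(Decision("file weekly sales summary", risk=0.1),
              lambda d: False) == "executed"
assert handle(Decision("approve unusually high payroll request", risk=0.95),
              lambda d: True) == "executed"
```

The key design point is that autonomy is the default and escalation is the exception, so adding oversight does not slow down the bulk of the workload.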
In a demo shown to VentureBeat, the Mixus team illustrated how agents can be created with plain-text instructions. To build a fact-checking agent for journalists, for example, co-founder Shai Magzimof described the multi-step process in natural language and directed the platform to embed human verification steps at specific thresholds, such as when a claim is high-risk and could lead to reputational damage or legal consequences.
A core strength of the platform is its integration with tools such as Google Drive, email and Slack, letting enterprise users feed their own data sources into agent workflows and interact with agents directly from their communication platform, with no need to switch contexts or learn a new interface (for example, the fact-checking agent was instructed to send approval requests to the decision-maker's email).
The platform's integration capabilities extend further to meet specific enterprise needs. Mixus supports the Model Context Protocol (MCP), which lets companies connect agents to their custom tools and APIs, avoiding the need to reinvent the wheel for existing internal systems. Combined with integrations for other enterprise software such as Jira and Salesforce, this allows agents to perform complex, cross-platform tasks, such as checking open engineering tickets and reporting their status to a manager on Slack.
Human oversight as a strategic multiplier
The AI agent space is currently undergoing a reality check as companies move from experimentation to production. The consensus among many industry leaders is that humans in the loop are a practical necessity for agents to perform reliably.
Mixus's collaborative model changes the economics of scaling AI. The company projects that by 2030, agent deployment may grow 1,000x and each human overseer will become 50x more efficient as AI agents grow more reliable. But the total need for human oversight will still continue to grow.
"Each human overseer handles more and more AI work over time, but you still need more total oversight as AI deployment explodes across your organization," Katz said.

For enterprise leaders, this means human skills will evolve rather than disappear. Instead of being replaced by AI, experts will be promoted into roles where they orchestrate fleets of AI agents and handle the high-risk decisions flagged for their review.
In this context, building a strong human-oversight function becomes a competitive advantage, allowing companies to deploy AI faster and more safely than their competitors.
"The companies that master this multiplier effect will dominate their industries, while those chasing full automation will struggle with reliability, compliance and trust," Katz said.