On Thursday, Box kicked off its annual BoxWorks conference by announcing a new set of AI features and agentic AI models woven throughout the company's products.
It's a larger slate of product announcements than usual for the conference, reflecting the rapid pace of AI development: the company launched AI agents in February, followed by others for deep research and search in May.
Now the company is introducing a new system called Box Automate, which works as a kind of operating system for AI agents, breaking workflows into discrete segments that can be augmented with AI as needed.
I spoke with CEO Aaron Levie about the company's approach to AI and the delicate business of competing with the foundation model companies. Unsurprisingly, he was bullish on the capabilities of AI agents in the modern workplace, but he was also clear-eyed about the limitations of current models and how to manage those limitations with current technology.
This interview has been edited for length and clarity.
You're announcing a slate of AI products today, so I want to start by asking about the big picture. Why build AI agents into a cloud content management service?
So the thing we think about all day — and what our focus is at Box — is how much work is changing because of AI. The vast majority of the impact right now is on workflows that involve unstructured data. We've long been able to automate anything dealing with structured data in a database. If you think about CRM systems, ERP systems, and HR systems, we've had years of automation in those areas. But what we've never had automation for is anything that touches unstructured data.
Think about any kind of legal review process, any kind of marketing asset management process, any kind of mergers-and-acquisitions deal review — all of those workflows deal with a lot of unstructured data. People have to review that data, make updates, make decisions, and so on. We were never able to bring much automation to those workflows. We could describe them in software, but computers just weren't good enough at reading a document or looking at a marketing asset.
So for us, AI agents mean that, for the first time ever, we can actually tap into all of this unstructured data.
What about the risks of deploying agents in a work context? Some of your customers must be nervous about unleashing something like this on sensitive data.
What we've seen from customers is that they want to know that every time they run a workflow, the agent will execute in more or less the same way, at the same point in the workflow, and nothing will go off the rails. You don't want an agent that makes some compounding error where, a hundred steps into the process, it starts going haywire.
It becomes really important to get the demarcation points right — where the agent begins and the other parts of the system end. For every workflow, there's a question of what needs to have deterministic guardrails and what can be fully open-ended and agentic.
What you can do with Box Automate is decide how much work you want each individual agent to do before it hands off to a different agent. So you might have an application agent that's separate from the audit agent, and so on. It essentially lets you deploy AI agents into any kind of workflow or business process in the enterprise.

What kinds of problems do you guard against by dividing up the workflow?
We've already seen some limitations even in the most advanced agentic systems, like Claude Code. At some point in a task, the model runs out of context window room to keep making good decisions. There's no free lunch right now in AI. You can't just get a long-running agent with an unlimited context window that can follow through on any task in your business. So you have to break down the workflow and use subagents.
I think we're in the context era of AI. What AI models and agents need is context, and the context they need to operate on sits inside your unstructured data. So our entire system is really designed around knowing what context you can give an AI agent to make sure it performs as effectively as possible.
There's a larger debate in the industry over the benefits of big, powerful frontier models versus smaller, more reliable ones. Does this position you on the side of the smaller models?
Maybe I should clarify: nothing about our system prevents a task from being arbitrarily long or complex. What we're trying to do is create the right guardrails so that you get to decide how expansive you want a given task to be.
We don't have a particular philosophy about where people should land on that continuum. We're just trying to design a future-proof architecture, built so that as models improve and agents improve, you get all of those benefits directly on our platform.
Another worry is data control. Since models are trained on so much data, there's a real fear that sensitive data will be regurgitated or misused. How does that factor in?
This is where a lot of AI deployments fall apart. People think, "Hey, this is easy. I'll give an AI model access to all my unstructured data, and it'll answer people's questions." Then it starts giving answers based on data that people can't access, or shouldn't be able to access. You need a very robust layer that handles access controls, data security, permissions, data governance, compliance — everything.
So we benefit from the couple of decades we've spent building a system that deals with exactly this problem: how do you make sure that only the right person can access each piece of data in the enterprise? So when an agent answers a question, you know for certain it can't pull in any data that the person shouldn't be able to access. That's just fundamentally built into our system.
Earlier this week, Anthropic released a new feature for uploading files directly to Claude.ai. It's a far cry from the kind of file management Box does, but you must be thinking about potential competition from the foundation model companies. How do you approach that strategically?
So if you look at what enterprises need as AI gets deployed widely, they need security, permissions, and control. They need the user interface, they need robust APIs, and they want choice among AI models — because on any given day, one AI model performs better on certain use cases than another, but that can change, and they don't want to be locked into a specific platform.
So what we've built is a system that effectively gives you all of those capabilities. We do the storage, the security, the permissions, the vector embedding, and we connect to every leading AI model out there.