On Thursday, Box kicked off its BoxWorks developer conference by announcing a new set of AI features, building agentic AI models into the backbone of the company's products.
The announcements continue a rapid pace of AI development at the company: Box launched its AI Studio last year, followed by a new set of data-extraction agents in February and agents for search and deep research in May.
Now, the company is rolling out a new system called Box Automate, which acts as a kind of operating system for AI agents, breaking workflows into discrete segments that can each be enhanced with AI.
I talked with CEO Aaron Levie about the company's approach to AI and the tricky business of competing with foundation model companies. Unsurprisingly, he was bullish on the possibilities for AI agents in the modern workplace, but he was also clear-eyed about the limitations of current models and how to manage those limitations with current technology.
This interview has been edited for length and clarity.
TechCrunch: You're announcing a suite of AI products today, so I want to start by asking about the big-picture vision. Why build AI agents into a cloud content-management service?
Aaron Levie: So what we think about all day long, and what our focus is at Box, is how work is changing because of AI. Right now, most of the impact is on workflows that involve unstructured data. We've already been able to automate anything that deals with structured data sitting in a database. If you think about CRM systems, ERP systems, HR systems, we've had automation there for years. But what we've never automated is anything that touches unstructured data.
Think about any kind of legal review process, any kind of marketing asset management process, any kind of M&A deal review: all of those workflows deal with lots of unstructured data. People have to review that data, update it, make decisions. We've never been able to do much automation of those workflows. We can describe them in software, but computers just haven't been good enough at reading a document or looking at a marketing asset.
So for us, AI agents mean that, for the first time, we can actually tap into all of this unstructured data.
TC: What about the risks of deploying agents in a business context? Some of your customers must be nervous about unleashing something like this on sensitive data.
Levie: What we're hearing from customers is that they want to know the agent is going to execute the workflow in more or less the same way every time they run it, and not go off the rails. You don't want an agent making compounding mistakes, where it handles the first hundred submissions fine and then starts running wild.
The right demarcation points become really important: where the agent starts and where the other parts of the system end. For each workflow, the question is what needs deterministic guardrails and what can be fully agentic and non-deterministic.
What you can do with Box Automate is decide how much work you want each individual agent to do before it hands off to a separate agent. So you can have a submission agent that's separate from the review agent, and so on. That lets you deploy AI agents at scale across basically any kind of workflow or business process in the organization.
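The pattern Levie describes, separate bounded agents with deterministic handoffs between them, can be sketched in a few lines. This is an illustrative toy, not Box's actual API: the stage functions stand in for narrow LLM-backed agents, and the names (`submission_agent`, `review_agent`, `run_workflow`) are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class StageResult:
    ok: bool
    payload: dict

# Each "agent" here is just a bounded function; in a real system it
# would wrap a model call with its own narrow prompt and context.
def submission_agent(doc: dict) -> StageResult:
    # Extract only the fields the next stage needs, nothing more.
    fields = {"title": doc.get("title", ""), "body": doc.get("body", "")}
    return StageResult(ok=bool(fields["title"]), payload=fields)

def review_agent(fields: dict) -> StageResult:
    # Flag documents for human review instead of acting on them.
    needs_review = "confidential" in fields["body"].lower()
    return StageResult(ok=True, payload={**fields, "needs_review": needs_review})

def run_workflow(doc: dict, stages: list[Callable[[dict], StageResult]]) -> dict:
    """Run stages in order. The deterministic guardrail is the loop
    itself: the pipeline halts the moment any stage reports failure,
    so one bad step cannot compound into the next."""
    payload = doc
    for stage in stages:
        result = stage(payload)
        if not result.ok:
            raise RuntimeError(f"workflow halted at {stage.__name__}")
        payload = result.payload
    return payload

out = run_workflow(
    {"title": "Q3 deal memo", "body": "Confidential draft terms"},
    [submission_agent, review_agent],
)
print(out["needs_review"])  # → True
```

The point of the sketch is the boundary: each agent sees only the payload the previous stage produced, and the handoff logic between them is ordinary deterministic code rather than another model call.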

TC: What kinds of problems do you prevent by dividing up the workflow?
Levie: We've already seen some limitations in even the most advanced fully agentic systems, like Claude Code. At some point in a task, the model runs out of context-window room to make good decisions. There's no free lunch in AI right now. You can't have a long-running agent with an unlimited context window chasing down any task in your enterprise. So you have to break up the workflow and use sub-agents.
I think we're in the era of context engineering in AI. What AI models and agents need is context, and the context they need to do their work is sitting inside your unstructured data. So our whole system is really designed to figure out what context you can give the AI agents to make sure they perform as effectively as possible.
TC: There's a big debate in the industry about the benefits of large, powerful frontier models versus smaller, more reliable models. Does this approach put you on the side of smaller models?
Levie: I should probably clarify: nothing about our system prevents a task from being arbitrarily long or complex. What we're trying to do is build the right guardrails so you can decide how you want to execute that task.
We don't have any particular philosophy about where people should land on that continuum. We're just trying to design a future-proof architecture. We've designed it in a way where, as models improve and agent capabilities improve, you just get all of those benefits directly in our platform.
TC: Another anxiety is data control. Because models are trained on so much data, there's a real fear that sensitive data will be regurgitated or misused. How does that factor in?
Levie: This is where a lot of AI deployments go wrong. People think, "Hey, it's easy. I'll just point an AI model at all my unstructured data, and it will answer questions for people." And then it starts answering based on data you don't have access to, or shouldn't have access to. You need a very robust layer that handles access controls, data security, permissions, data governance, compliance, all of it.
So we benefit from the couple of decades we've spent building a system that handles exactly that problem: how do you ensure that only the right people have access to each piece of data in the enterprise? So when an agent answers a question, you know deterministically that it can't pull in any data the person shouldn't have access to. That's just fundamentally built into our system.
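The deterministic guarantee Levie is pointing at comes from filtering data by permissions before the model ever sees it, rather than asking the model to police itself. A minimal sketch of that idea, with an entirely hypothetical data model and function names (this is not Box's API):

```python
# Hypothetical permission-aware retrieval: documents are filtered by an
# access-control list *before* being handed to the model, so the agent
# can only answer from data the requesting user is allowed to see.
DOCS = [
    {"id": 1, "text": "Public product roadmap", "allowed": {"alice", "bob"}},
    {"id": 2, "text": "Executive compensation details", "allowed": {"alice"}},
]

def retrieve_for(user: str, docs: list[dict]) -> list[dict]:
    """Deterministic guardrail: the permission check runs outside the
    model, so no prompt can talk the agent into leaking a document
    that was never placed in its context."""
    return [d for d in docs if user in d["allowed"]]

def answer(user: str, question: str) -> str:
    context = retrieve_for(user, DOCS)
    # Stand-in for a model call: just report which docs were visible.
    visible = [d["id"] for d in context]
    return f"Answering {question!r} using docs {visible}"

print(answer("bob", "What is on the roadmap?"))
# bob only ever sees doc 1; doc 2 never enters the context.
```

The design choice worth noting is that the access check and the model call are separate layers: the model can hallucinate, but it cannot reveal content that the retrieval layer deterministically withheld.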
TC: Earlier this week, Anthropic released a new feature for uploading files directly to Claude.ai. That's a long way from the kind of file management Box does, but you must be thinking about potential competition from foundation model companies. How do you approach that strategically?
Levie: So if you think about what enterprises need when they deploy AI at scale, they need security, permissions, and control. They need user interfaces, they need powerful APIs, and they want their choice of AI models, because one day one AI model offers capabilities that are better than another's for a given use case, but that can change, and they don't want to be locked into a particular platform.
So what we've built is a system that effectively gives you all of those capabilities. We handle storage, security, permissions, and vector embeddings, and we connect with every leading AI model out there.

