
Presented by Elastic
As organizations rush to implement agentic AI, access to proprietary data from every corner of the enterprise will be critical
By now, most organizations have heard of agentic AI: systems that “think” by autonomously gathering tools, data, and other sources of information to arrive at answers. But here’s the thing: an agent’s accuracy and credibility depend on the relevance of the context it can access. In most enterprises, that context is scattered across a variety of unstructured data sources, including documents, emails, business apps, and customer feedback.
As organizations look toward 2026, solving this problem will be key to accelerating agentic AI rollouts around the world, says Ken Exner, chief product officer at Elastic.
“People are beginning to realize that to do agentic AI correctly, you must have relevant data,” Exner says. “Relevance is important in the context of agentic AI, because that AI is taking action on your behalf. When people struggle to build AI applications, I can almost guarantee you that the problem is relevance.”
Agents everywhere
This urgency may usher in a make-or-break period as organizations scramble to gain a competitive edge or create new capabilities. A Deloitte study predicts that by 2026, more than 60% of large enterprises will have deployed agentic AI at scale, a huge jump from experimental stages to mainstream implementation. And researcher Gartner forecasts that by the end of 2026, 40% of all enterprise applications will include task-specific agents, up from less than 5% in 2025, as task specialization capabilities evolve AI assistants into context-aware AI agents.
Enter context engineering
The process of delivering relevant context to agents at the right time is known as context engineering. It not only ensures that an agent application has the data it needs to provide accurate, in-depth responses; it also helps the large language model (LLM) understand which tools it needs to find and use that data, and how to call those tools’ APIs.
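As a simplified illustration (not Elastic's implementation), context engineering can be reduced to a retrieve-then-pack loop: score candidate documents against the user's question, keep the most relevant ones that fit a size budget, and place them in the prompt. The toy keyword scorer below stands in for a real search engine:

```python
# Minimal context-engineering sketch: retrieve the most relevant
# documents for a query and pack them into an LLM prompt.
# The keyword scorer is a stand-in for a real relevance engine.

def score(query: str, doc: str) -> int:
    """Count query terms that appear in the document (toy relevance)."""
    terms = query.lower().split()
    return sum(term in doc.lower() for term in terms)

def build_context(query: str, docs: list[str], budget_chars: int = 500) -> str:
    """Pick the highest-scoring docs that fit the character budget."""
    ranked = sorted(docs, key=lambda d: score(query, d), reverse=True)
    picked, used = [], 0
    for doc in ranked:
        if score(query, doc) == 0 or used + len(doc) > budget_chars:
            continue
        picked.append(doc)
        used += len(doc)
    return "\n".join(picked)

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend the selected context to the question for the LLM."""
    context = build_context(query, docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Refund policy: customers may return items within 30 days.",
    "Office hours are 9am to 5pm on weekdays.",
    "Refunds are issued to the original payment method.",
]
print(build_prompt("What is the refund policy?", docs))
```

In production the scoring and budgeting steps are what a search platform handles: irrelevant documents (the office-hours line above) never reach the model, which is the "relevance" problem Exner describes.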
While open-source standards like the Model Context Protocol (MCP) now allow LLMs to connect and communicate with external data, few platforms let organizations create precise AI agents that use their own data and seamlessly combine retrieval, governance, and orchestration in one place.
Elasticsearch has long been a leading platform for the retrieval that sits at the core of context engineering. Elastic recently released a new feature within Elasticsearch called Agent Builder, which simplifies the entire operational lifecycle of agents: development, configuration, execution, customization, and observation.
Agent Builder helps users build MCP tools on private data using a variety of technologies, including ES|QL, the Elasticsearch query language, a piped language for filtering, transforming, and analyzing data, as well as workflow modeling. Users can then combine those tools with prompts and an LLM to create an agent.
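To make the idea concrete, here is a hypothetical sketch of a tool definition built around an ES|QL query. The dict layout, tool name, and `tool_manifest` helper are illustrative assumptions, not Agent Builder's actual schema; only the piped `FROM ... | WHERE ... | STATS ...` query string follows real ES|QL syntax:

```python
# Hypothetical MCP-style tool definition wrapping an ES|QL query.
# The structure below is illustrative, not Elastic's actual format;
# only the piped query string reflects real ES|QL syntax.

support_ticket_tool = {
    "name": "count_open_tickets_by_team",
    "description": (
        "Counts open support tickets per team so the agent can "
        "answer workload questions from private data."
    ),
    # ES|QL: read an index, filter, then aggregate - each stage
    # is piped into the next, like a shell pipeline.
    "query": (
        "FROM support-tickets "
        '| WHERE status == "open" '
        "| STATS open_count = COUNT(*) BY team"
    ),
}

def tool_manifest(tools: list[dict]) -> list[dict]:
    """Expose only name + description: what an LLM sees when picking tools."""
    return [{"name": t["name"], "description": t["description"]} for t in tools]

print(tool_manifest([support_ticket_tool]))
```

The design point is the split: the LLM only ever sees the name and description when deciding which tool to call, while the query itself runs against private data inside the search platform.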
Agent Builder provides a configurable, out-of-the-box conversational agent that lets users chat with the data in an index, and it also lets them create an agent from scratch using various tools and prompts on top of private data.
“At Elastic, data is the center of our world. We’re trying to make sure you have the tools you need to put that data to work,” Exner explains. “As soon as you open Agent Builder, you point it to an index in Elasticsearch, and you can start chatting with any data you connect to it: any data that’s indexed in Elasticsearch, or data brought in from outside sources through integrations.”
Context engineering as a discipline
Prompt engineering and context engineering are becoming disciplines in their own right. They don’t require a computer science degree, but there is an art to them, and more classes and best practices will emerge.
“We want to make it very simple to do this,” Exner says. “People need to figure out how to drive automation with AI. This is what will increase productivity. Those who focus on that will see more success.”
Additionally, new context engineering patterns will emerge. The industry has already moved from prompt engineering to retrieval-augmented generation (RAG), where information is delivered to the LLM in its context window, and on to MCP-based solutions that help the LLM with tool selection. But it will not stop there.
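The tool-selection step those MCP-based approaches enable can be sketched in miniature. Real systems hand the tool descriptions to the LLM and let it choose; the assumed word-overlap matcher below is only a toy stand-in that shows why well-written descriptions drive good selection:

```python
# Simplified tool selection: match a user request against tool
# descriptions by shared words. In real MCP setups the LLM itself
# reads the same descriptions and picks; this toy scorer just
# illustrates the mechanism.

TOOLS = {
    "search_documents": "Find passages in indexed documents matching a query.",
    "count_open_tickets": "Count open support tickets grouped by team.",
    "summarize_thread": "Summarize an email thread for a quick briefing.",
}

def select_tool(request: str) -> str:
    """Return the tool whose description shares the most words with the request."""
    req_words = set(request.lower().split())
    def overlap(item: tuple[str, str]) -> int:
        desc_words = set(item[1].lower().replace(".", "").split())
        return len(req_words & desc_words)
    return max(TOOLS.items(), key=overlap)[0]

print(select_tool("how many open tickets does each team have"))
```

Because selection hinges entirely on the descriptions, vague or overlapping tool descriptions are a common failure mode in agent setups, whichever model does the choosing.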
“Given how fast things are moving, I guarantee that new patterns will emerge very quickly,” Exner says. “There will still be context engineering, but there will be new patterns for how to share data with LLMs and how to ground them in the right information. And I predict more patterns that will make it possible for an LLM to understand private data on which it has not been trained.”
Agent Builder is now available in technical preview. Get started with an Elastic Cloud trial, and see the documentation for Agent Builder here.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they are always clearly marked. For more information, contact sales@venturebeat.com.

