However, most experts agree that AI agents are self-contained code modules that can carry out tasks autonomously. Security researcher Andres Riancho of Wiz tells CSO, “The basic concept is that you have an LLM that can decide to do a task, which is then executed, most likely through MCP,” or Model Context Protocol, a server that acts as a bridge between AI models and various external tools and services.
Ben Seri, co-founder and CTO of Zafran Security, draws a parallel between the rise of AI agents and the earlier rise of generative AI. “These are tools that will enable the LLM to act like an analyst, as a kind of intermediary,” he tells CSO. “In a way, it’s not that different from where it started, where it’s a machine: you can give it a question, and it can give you an answer. But the difference now is the process. It’s when you take AI and LLMs and give them the ability to take some action on your behalf.”
Trust, transparency, and moving forward slowly
Like all technologies, though perhaps more dramatically, agentic AI carries both risks and benefits. One clear risk of AI agents is that, like most LLMs, they can hallucinate or make errors that cause problems.