
ZDNET's key takeaways
- McKinsey built and observed 50+ agentic AI deployments over the course of a year.
- These digital employees require a lot of work to get up to speed.
- AI agents are not the best answer to every business need.
By many accounts, AI agents are now considered digital colleagues in today's workforce. So, like human workers, they should be subject to an annual performance review, right?
The folks at McKinsey did just that, publishing the results of a one-year performance review of the AI agents the consulting firm has implemented and observed. How did these digital employees do in their first year on the job? The McKinsey team's conclusions: they require a lot of work to get up to speed; they are not always the best answer to every business need; and their human counterparts are not always impressed with the agents' work.
Also: Microsoft will compete with AWS to offer a marketplace for AI apps and agents
The progress report, written by McKinsey's Lareina Yee, Michael Chui, and Roger Roberts, reviewed at least 50 agentic AI deployments that the authors led along with others at McKinsey. After a year with AI agents, they arrived at six lessons.
1. Agents perform better within workflows
Applying AI agents for AI agents' sake won't cut it, Yee and her colleagues advised. It's more about injecting agents into workflows to improve them.
According to the review, "agentic AI efforts that focus on fundamentally reimagining entire workflows -- that is, the steps involving people, processes, and technology -- are more likely to deliver positive results." Start by addressing key user pain points, the co-authors suggest. Organizations with document-intensive workflows, such as insurance companies or law firms, benefit from agents handling the tedious steps.
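To make that idea concrete, here is a minimal Python sketch, not drawn from McKinsey's report, of what injecting an agent into a document-heavy workflow can look like: deterministic checks and human review stay in place, and the agent handles only the tedious summarization step. The call_agent and process_claim functions are purely hypothetical.

```python
# Illustrative only: a document-review workflow where an agent handles one
# tedious step (summarizing a claim file) while deterministic code and a
# human reviewer handle the rest. `call_agent` stands in for whatever
# agent framework an organization actually uses.

def call_agent(task: str, document: str) -> str:
    """Placeholder for an agentic AI call; replace with a real client."""
    return f"[agent output for task '{task}' on {len(document)} chars]"

def validate_format(document: str) -> bool:
    """Deterministic check -- no agent needed for a fixed-format rule."""
    return document.strip().startswith("CLAIM")

def process_claim(document: str) -> dict:
    """The workflow is redesigned end to end; the agent is one step in it."""
    if not validate_format(document):
        return {"status": "rejected", "reason": "malformed document"}

    summary = call_agent("summarize key facts and amounts", document)

    # The agent's output feeds a human decision, not an automatic payout.
    return {"status": "ready_for_review", "summary": summary}

if __name__ == "__main__":
    print(process_claim("CLAIM #1042: water damage, estimated $8,300 ..."))
```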
2. Agents are not always the answer
"To help avoid wasted investment or unwanted complexity, consider the role of agents much as you would evaluate people for a high-performing team," Yee and her co-authors advise. "The important question to ask is, 'What needs to be done, and what are the relative talents of each possible team member for achieving those goals?'"
Also: Got AI FOMO? 3 bold but realistic bets your business can try today
If agentic AI is overkill for a problem, or if the problem calls for a standardized, repeatable approach with low variability, simpler options such as rules-based automation, predictive analytics, or straightforward large language model (LLM) prompting may be the better choice.
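As a rough illustration of one of those simpler options, the hypothetical sketch below handles a standardized, low-variability task (routing support tickets) with plain rules-based automation, reserving anything ambiguous for a costlier tool or a person. The keywords and queue names are invented for the example.

```python
# Illustrative only: a standardized, low-variability task (routing support
# tickets by keyword) handled with plain rules-based automation. No agent,
# no LLM call -- just a lookup table that is cheap, fast, and auditable.

ROUTING_RULES = {
    "invoice": "billing",
    "refund": "billing",
    "password": "it_support",
    "login": "it_support",
}

def route_ticket(subject: str) -> str:
    """Return the destination queue for a ticket subject line."""
    lowered = subject.lower()
    for keyword, queue in ROUTING_RULES.items():
        if keyword in lowered:
            return queue
    # Only ambiguous cases fall through to a costlier option
    # (an LLM prompt or a human), keeping agents for where they add value.
    return "needs_triage"

if __name__ == "__main__":
    print(route_ticket("Cannot reset my password"))   # -> it_support
    print(route_ticket("Question about my invoice"))  # -> billing
    print(route_ticket("Something odd happened"))     # -> needs_triage
```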
3. AI 'slop' has been a recurring issue
One of the most common issues the McKinsey team observed is "agentic systems that seem impressive in a demo but disappoint the users who are actually responsible for the work," delivering AI slop, or low-quality output. As a result, users lose confidence in the agents and stop using them.
"Companies should invest in agent development much as they do in employee development," the co-authors advise. As with human employees, "agents should be given clear job descriptions, onboarding, and continuous feedback so that they become more effective and improve over time."
4. Large numbers of agents are difficult to track
"When working with only a few AI agents, reviewing their work and spotting errors can be mostly straightforward," Yee and her team said. "But as companies roll out hundreds, or even thousands, of agents, this task becomes challenging. When there is a mistake -- and there will always be mistakes as companies scale up their agents -- it is difficult to pinpoint exactly where things went wrong."
Also: 6 insights service leaders need to know about agentic AI
The team learned to verify agent performance at each stage of the workflow, employing observability tools. "Building monitoring and evaluation into workflows can catch mistakes quickly, refine the logic, and improve performance even after agents are deployed."
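A minimal sketch of that idea, assuming a Python workflow and standing in for whatever observability tooling a team actually uses: each step is wrapped with logging and a simple output check, so a bad result can be traced to a specific step rather than to "the agents" in general. The monitored_step helper and the step functions are hypothetical.

```python
# Illustrative only: wrapping each workflow step with logging and a simple
# output check, so that when a mistake surfaces it can be traced to a
# specific step. The step functions here are stand-ins, not McKinsey's tooling.

import logging
from typing import Callable

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("agent_workflow")

def monitored_step(name: str, fn: Callable[[str], str], check: Callable[[str], bool]):
    """Run a workflow step, log its output, and flag failed checks."""
    def wrapper(payload: str) -> str:
        result = fn(payload)
        ok = check(result)
        log.info("step=%s ok=%s output_chars=%d", name, ok, len(result))
        if not ok:
            log.warning("step=%s failed its evaluation check", name)
        return result
    return wrapper

# Hypothetical steps: in practice these would call agents or services.
extract = monitored_step("extract", lambda d: d.upper(), lambda r: len(r) > 0)
summarize = monitored_step("summarize", lambda d: d[:40], lambda r: len(r) <= 40)

if __name__ == "__main__":
    doc = "policy 7781: coverage limits and exclusions apply as follows ..."
    summarize(extract(doc))
```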
5. Agents deliver the most value when shared across tasks
Agents can be expensive and redundant if their designers reinvent the wheel for every new task. "Companies often create a unique agent for each identified task," the McKinsey team said. "That can lead to significant redundancy and waste, because the same agent can often complete different tasks that share many similar steps -- such as ingesting, extracting, searching, and analyzing."
Also: How AI agents can generate $450 billion by 2028 - and what stands in the way
Investing in reusable agents starts with identifying recurring functions, they advised. "Develop agents and agent components that can be easily reused across workflows, and make them simple for developers to access."
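One way to picture that advice, in a purely illustrative Python sketch: shared capabilities such as ingest, extract, and analyze live in reusable components, and two different workflows compose them rather than each getting its own bespoke agent. The class and function names here are invented.

```python
# Illustrative only: factoring shared capabilities (ingest, extract, analyze)
# into reusable components that two different workflows compose, instead of
# building a one-off agent per task.

class IngestComponent:
    def run(self, source: str) -> str:
        return f"raw text from {source}"

class ExtractComponent:
    def run(self, text: str) -> dict:
        return {"entities": text.split()[:3]}

class AnalyzeComponent:
    def run(self, facts: dict) -> str:
        return f"analysis of {len(facts['entities'])} entities"

# Both workflows reuse the same components; only the composition differs.
def contract_review(source: str) -> str:
    ingest, extract, analyze = IngestComponent(), ExtractComponent(), AnalyzeComponent()
    return analyze.run(extract.run(ingest.run(source)))

def claims_triage(source: str) -> dict:
    ingest, extract = IngestComponent(), ExtractComponent()
    return extract.run(ingest.run(source))

if __name__ == "__main__":
    print(contract_review("contracts/acme_msa.pdf"))
    print(claims_triage("claims/1042.pdf"))
```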
6. Agents will never work fully on their own
People will always need to "oversee model accuracy, ensure compliance, apply judgment, and handle edge cases," the co-authors insisted. Redesign work "so that people and agents can collaborate well together. Without that focus, even the most advanced agentic programs risk silent failures, compounding errors, and user rejection."
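As an illustration of that human-agent collaboration, the hypothetical sketch below gates agent decisions on a confidence score, sending low-confidence or unusual cases to a human reviewer instead of acting on them automatically. The agent_decision function and the 0.85 threshold are assumptions made for the example.

```python
# Illustrative only: a human-in-the-loop gate that sends low-confidence or
# edge-case agent outputs to a person instead of acting on them
# automatically. The confidence score and threshold are hypothetical.

CONFIDENCE_THRESHOLD = 0.85

def agent_decision(request: str) -> tuple[str, float]:
    """Placeholder agent call returning a proposed action and a confidence."""
    if "standard" in request:
        return "approve", 0.95
    return "approve", 0.60  # unusual request -> lower confidence

def handle_request(request: str) -> str:
    action, confidence = agent_decision(request)
    if confidence < CONFIDENCE_THRESHOLD:
        # Edge cases, compliance-sensitive calls, and low confidence go to a human.
        return f"escalated to human reviewer (confidence {confidence:.2f})"
    return f"auto-{action} (confidence {confidence:.2f})"

if __name__ == "__main__":
    print(handle_request("standard renewal, no changes"))
    print(handle_request("unusual multi-party amendment"))
```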
As a result, next year's agent performance review may also be less than stellar.

