Machine identities pose a major security risk for enterprises, and that risk is set to increase dramatically as AI agents are deployed. According to a report by cybersecurity vendor CyberArk, machine identities, also known as non-human identities (NHIs), now outnumber human identities by 82 to 1, and their numbers are expected to grow rapidly. By comparison, in 2022 machine identities outnumbered humans by 45 to 1.
“If you look at IAM (identity and access management) as a whole, machine identity is the most immature area,” says Gartner analyst Steve Vessels. “It’s very difficult to catch up. And then we get to AI. Things are moving so fast. People are doing it willy-nilly. They are throwing AI agents everywhere.”
Traditional security risks
Managing machine identities was already a problem before AI agents, but businesses found ways to cope, including automation scripts that rotate a certificate, password, or account every 90 days. Even so, the result can be self-signed certificates, certificates that expire without proper renewal procedures, and lingering security risks from hard-coded credentials and service accounts.
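For teams building that kind of rotation automation, the core pattern is small. The following is a minimal Python sketch of a 90-day rotation check, assuming a hypothetical secrets store and directory client (the `store` and `directory` objects and their methods are placeholders, not any specific product’s API).

```python
import secrets
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=90)  # rotation policy referenced in the article

def rotate_if_stale(store, directory, account_name):
    """Rotate a service-account password if it is older than MAX_AGE.

    `store` and `directory` are hypothetical clients standing in for
    whatever vault and identity provider an enterprise actually uses.
    """
    record = store.get(account_name)  # assumed shape: {"password": ..., "rotated_at": datetime}
    age = datetime.now(timezone.utc) - record["rotated_at"]
    if age < MAX_AGE:
        return False  # still within policy, nothing to do

    new_password = secrets.token_urlsafe(32)  # cryptographically strong random secret
    directory.set_service_account_password(account_name, new_password)  # hypothetical call
    store.put(account_name, {
        "password": new_password,
        "rotated_at": datetime.now(timezone.utc),
    })
    return True
```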
There are three main issues when it comes to NHIs: visibility into these identities, long-lived and unmanaged NHIs, and default and hard-coded credentials.
Visibility
The Yageo Group had so many problematic machine identities that information security operations manager Terrick Taylor says he is almost embarrassed to say how many, even though the group has now automated the monitoring of both human and non-human identities and has a process for managing their life cycle. “The last time I looked at the portal, there were more than 500,” he says.
But once he can see a problem (a default password, for example, or an account that is over-permissioned or more than 90 days old), he can take steps to shut it down or take other measures. The issue can grow considerably at a company that frequently acquires others with different technology stacks.
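A first-pass visibility check along those lines can be very simple. The sketch below flags the same warning signs Taylor mentions (default passwords, over-broad permissions, accounts older than 90 days); the inventory structure and field names are assumptions for illustration.

```python
from datetime import datetime, timedelta, timezone

DEFAULT_PASSWORDS = {"password", "admin", "changeme"}  # illustrative list only
MAX_AGE = timedelta(days=90)

def audit_identities(inventory):
    """Yield (identity name, reasons) for machine identities that need attention.

    `inventory` is an assumed list of dicts such as:
    {"name": "svc-backup", "password": "...", "roles": [...], "created": datetime}
    """
    now = datetime.now(timezone.utc)
    for identity in inventory:
        reasons = []
        if identity["password"].lower() in DEFAULT_PASSWORDS:
            reasons.append("default password")
        if "admin" in identity["roles"]:
            reasons.append("over-permissioned")
        if now - identity["created"] > MAX_AGE:
            reasons.append("older than 90 days")
        if reasons:
            yield identity["name"], reasons
```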
According to the CyberArk survey of more than 2,600 security decision-makers in 20 countries, 70% of respondents say identity silos are a root cause of cybersecurity risk, and 49% say they lack full visibility into permissions across their cloud environments.
Complicating matters, machine identities can be created by different individuals and systems within an organization, for a host of different reasons. Some of these identities are created by employees who then leave the company, taking knowledge of their existence with them when they go. But the access rights remain.
Even more worrying, a single compromised account with high privileges can be used by an attacker to create more service accounts, helping them spread wider and deeper within an organization and making them very difficult to root out.
Long-lived non-human identities
Life cycle management is important for securing machine identities. Beyond the operational headaches of expired certificates, there is also the risk that the longer a credential goes without being rotated, the greater the odds that someone has stumbled across it. “The most difficult thing with a service account is figuring out why it was created and how it is being used,” says Gartner’s Vessels. “When you spin it up, you know what it is for, but if you do not document that well and maintain the documentation, it quickly gets lost.”
Companies end up with service accounts everywhere, which creates a large attack surface that only grows over time. “We have seen passwords that were set nine years ago and never changed,” he says. “That password gets embedded everywhere, and it becomes very difficult to rotate it, change it, secure it.”
Many companies do not have life cycle management in place for all their machine identities, and security teams can be reluctant to shut down old accounts because doing so can break important business processes.
Yageo’s Taylor is not one of them. “If I see anything older than 90 days, I am killing it.”
Others may have to join him soon. In April, the CA/Browser Forum voted unanimously to shorten the maximum TLS certificate lifespan from the current 398 days to 200 days by next March, 100 days by March 2027, and just 47 days by March 2029. “This is going to be a fundamental problem for a lot of us,” says one vice president of IT and CISO. “We have a very strong process, but there are still days when a certificate renewal falls through the cracks.”
Shorter lifetimes reduce the window in which keys compromised through man-in-the-middle attacks and data breaches can be abused, and companies are encouraged to embrace automation.
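Automating renewal starts with knowing when each certificate expires. Here is a minimal sketch using only the Python standard library to check a host’s TLS certificate and report the days remaining; in practice the result would feed an ACME client or whatever renewal tooling the organization already runs.

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(host, port=443):
    """Return how many days remain before `host`'s TLS certificate expires."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()  # parsed certificate as a dict
    not_after = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
    )
    return (not_after - datetime.now(timezone.utc)).days

if __name__ == "__main__":
    for host in ["example.com"]:  # replace with the hosts you actually manage
        remaining = days_until_expiry(host)
        if remaining < 30:  # renewal threshold tightens as maximum lifetimes shrink
            print(f"RENEW SOON: {host} expires in {remaining} days")
```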
Default and hard-coded credentials
When an application is first built, it is easy to drop in a placeholder password that is literally the word “password,” rather than using an access-management system that issues credentials on demand for one-time use. And some systems ship with default logins such as “admin” that are never changed.
George says these are mistakes companies make all the time. “An attacker does not really have to be sophisticated to get in. It is like leaving your key in the lock when you leave home. At that point, does it even count as a break-in if a criminal walks in? You let them in.”
Similarly, when developers hard-code credentials and other secrets into software and the code leaks, those credentials are ripe for harvesting.
According to Verizon’s 2025 Data Breach Investigations Report, public Git repositories held about half a million exposed credentials, which Verizon refers to as secrets, and the median time to remediate a leaked secret was 94 days. That is three months in which an attacker can find the information and exploit it.
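Catching hard-coded secrets before they reach a public repo is largely pattern matching. The sketch below shows the idea with two illustrative regexes; real secret scanners ship hundreds of rules plus entropy checks, so treat this as a sketch of the technique rather than a usable scanner.

```python
import re
from pathlib import Path

# Illustrative patterns only; production scanners cover many more credential formats.
PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Generic assignment": re.compile(r"(?i)(password|secret|api_key)\s*=\s*['\"][^'\"]{8,}['\"]"),
}

def scan_repo(root):
    """Yield (file, rule, line number) for lines that look like hard-coded secrets."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for rule, pattern in PATTERNS.items():
                if pattern.search(line):
                    yield str(path), rule, lineno

for finding in scan_repo("."):
    print(finding)
```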
And attackers do exploit them. According to the report, credential abuse was the single most common initial access vector, seen in 22% of the roughly 10,000 breaches analyzed, putting it ahead of both exploitation of vulnerabilities and phishing, although Verizon did not distinguish between human and machine identities in its report.
As attackers deploy more AI and automation of their own, all of the traditional machine identity risks become more acute. AI-powered bots can crawl leaked data and source code repositories to find unsecured machine identities and use them to gain even more access.
Generative AI and AI agents increase NHI risk
According to the CyberArk survey, AI is expected to be the top source of new identities with privileged and sensitive access in 2025. It is no surprise, then, that 82% of companies say their use of AI creates access risks. Many generative AI technologies are so easy to deploy that business users can do it without IT input and without security review. About half of all organizations, 47%, say they are unable to secure and manage shadow AI.
AI agents are the next step in the evolution of generative AI. Unlike chatbots, which work with company data only when a user or a prompt supplies it, agents are typically more autonomous and can go out and get the information they need on their own. That means they require access to enterprise systems broad enough to let them complete all their assigned tasks. “What I am most worried about is it getting things wrong,” says Yageo’s Taylor. If an AI agent’s permissions are set incorrectly, “it opens the door for a lot of bad things.”
AI agents can exhibit unexpected and emergent behaviors because of their ability to plan, reason, act, and learn. An AI agent instructed to meet a particular goal can find a way to do so in an unexpected manner, and with unexpected results.
The risk is compounded by agentic AI systems, which use several AI agents working together to complete larger tasks or even automate entire business processes. In addition to the individual agents, an agentic AI system may include access to data and tools, as well as safety and risk guardrails.
“Code in old scripts is static, and you can see the behavior, see the code, and you know how it should connect,” says Taylor. “In AI, the code changes itself … agentic AI is the cutting edge. And sometimes you step on that edge, and it can cut you.”
This is not a purely theoretical threat. In May, Anthropic released the results of safety testing on its latest Claude model. In one test, Claude was given access to a company email account so that it could act as a useful assistant. While reading the email, Claude learned about its own imminent replacement by a new AI system, and also that the engineer in charge of the replacement was having an affair. In 84% of test runs, Claude tried to blackmail the engineer to avoid being replaced. Anthropic said it has guardrails in place to prevent this kind of thing, but it has not released test results on those guardrails.
That should raise serious concerns for any company giving AI direct access to its email systems.
Unexpected behavior is just the beginning. According to the Cloud Security Alliance (CSA), another challenge with agents is the unstructured nature of their communications. Traditional applications communicate through highly structured, well-defined channels and formats. AI agents can communicate with other agents and systems in plain language, which makes them difficult to monitor with traditional security tooling.
How cybersecurity leaders can secure machine identities
The first step is to gain visibility into all the machine identities in an environment and to create policies for managing them.
Gartner’s Vessels recommends that enterprises move toward centralized governance of machine identities and tie credentials to specific workloads. “Then manage the life cycle of that application or workload. That is the modern way to do it.”
Credentials can last for five minutes, or even less. “They have that connection just for the time they need it. Then it goes away.”
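One common way to make credentials that simply go away is a signed token with a short expiry. The sketch below uses the PyJWT library to mint a token valid for five minutes, after which verification fails on its own; the signing key and claim layout are illustrative assumptions.

```python
from datetime import datetime, timedelta, timezone

import jwt  # PyJWT

SIGNING_KEY = "replace-with-a-key-from-your-secrets-store"  # illustrative only

def issue_workload_token(workload_id, ttl=timedelta(minutes=5)):
    """Mint a short-lived credential tied to a specific workload."""
    now = datetime.now(timezone.utc)
    claims = {"sub": workload_id, "iat": now, "exp": now + ttl}
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

def verify_workload_token(token):
    """Raises jwt.ExpiredSignatureError once the five minutes are up."""
    return jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])
```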
There is plenty of guidance for companies looking to modernize their identity management, and plenty of established vendors in the space. And the technology continues to evolve as the use of AI grows.
According to the CyberArk survey, 94% of respondents are already using AI and LLMs in their identity security strategies. For example, 61% are considering using AI to secure both human and machine identities over the next 12 months.
Unfortunately, when it comes to securing the identities of AI agents themselves, things do not look as rosy. “There are not many standards around agentic AI, and everyone is piling into it,” Vessels says. “There is also no complete framework for handling these things.”
Companies also need to monitor what their AI agents are doing, what connections they are making, and what information they are pulling, he says.
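At its simplest, that monitoring means writing a structured audit record every time an agent calls a tool or opens a connection. The sketch below wraps tool functions with such logging; the agent and tool names are assumptions, not any particular framework’s API.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent-audit")

def audited(agent_id, tool_name, tool_fn):
    """Wrap a tool function so every call by an agent leaves an audit record."""
    def wrapper(*args, **kwargs):
        record = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": agent_id,
            "tool": tool_name,
            "args": repr(args),
            "kwargs": repr(kwargs),
        }
        audit_log.info(json.dumps(record))  # ship these records to a SIEM in practice
        return tool_fn(*args, **kwargs)
    return wrapper

# Example: wrap a hypothetical web-fetch tool before handing it to an agent
# fetch = audited("billing-agent-1", "http_get", http_get)
```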
Anand Rao, AI professor at Carnegie Mellon University, suggests that some enterprises may want to wait, securing their legacy infrastructure and modernizing their machine identity environment first, and only then deploying AI agents.
It all depends on their risk tolerance. And there are some frameworks companies can look to. In March, the SANS Institute released a set of AI security guidelines that include recommendations such as limiting the autonomy of AI agents, restricting the tools they can use, and ensuring that agents run with the least privilege possible.
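A minimal way to apply that least-privilege, limited-tools advice is an explicit per-agent allowlist that denies anything not listed. The sketch below illustrates the pattern with assumed agent and tool names; a production system would enforce this in the agent runtime or gateway rather than in application code.

```python
# Per-agent tool allowlists: anything not listed is denied by default.
AGENT_TOOL_ALLOWLIST = {
    "invoice-agent": {"read_invoice", "create_draft_email"},
    "support-agent": {"search_kb", "read_ticket"},
}

class ToolNotPermitted(Exception):
    pass

def call_tool(agent_id, tool_name, registry, **kwargs):
    """Invoke a tool only if the agent's allowlist permits it."""
    allowed = AGENT_TOOL_ALLOWLIST.get(agent_id, set())
    if tool_name not in allowed:
        raise ToolNotPermitted(f"{agent_id} may not call {tool_name}")
    return registry[tool_name](**kwargs)  # `registry` maps tool names to callables
```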
In May, the CSA released its agentic AI red teaming guide, which outlines the many ways the risks of AI agents differ from those of traditional applications and offers practical recommendations for spotting agents that are misbehaving.