Traditional safety equipment struggles to struggle as they continuously participate in the dangers introduced by LLM and agent AI systems that were not designed to prevent heritage rescue. From early injections to model extraction, the surface of the attack to AI applications is specificly strange.
Traditional safety equipment like WAFS and API Gateway are mainly insufficient to protect generative AI systems, as they are not pointing with AI interactions, reading and not knowing them and do not know how to explain them, “Unmarried Litan said, said VP Analyst, Gartner.
AI hazards can be zero-day
The AI systems and applications, while automating commercial workflows and detecting the danger and reaction are highly capable of automatic, bring their problems into the mixture, problems that were not before. SQL injections or cross-site scripting adventures have developed safety threats to manipulations in behavior, where adversities have tricked the model to leak data, bypass filters or acting in unexpected ways.
Gartner’s Linton said that while AI is threatened like an extract of model for many years, some are very new and difficult. “Nation states and contestants who do not play with the rules are reverse-engineering state-of-the-art AI models that others have created for many years.”