
“This creates a perfect storm for cybercriminals,” said J. Stephen Kowski, Field CTO at SlashNext. “When AI models hallucinate URLs pointing to unregistered domains, attackers can simply register those exact domains and wait for victims to arrive.” He likened it to handing attackers a roadmap to future victims. “A single hallucinated link, recommended at scale, can compromise thousands of people who would normally be more cautious.”
Netcraft’s findings are especially troubling for established brands, with finance and fintech among the hardest hit. Credit unions, regional banks, and mid-sized platforms fared far worse than global giants. Smaller brands, which are less likely to appear in LLM training data, were hallucinated at the highest rates.
“LLMs don’t retrieve information, they generate it,” said Nicole Carignan, CISO at Darktrace. “And when users treat those outputs as fact, it opens the door to mass exploitation.” She pointed to a built-in structural flaw: the models are designed to be helpful, not accurate, and unless AI responses are grounded in validated data, they will keep inventing URLs, often with dangerous results.
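One way to ground AI responses in validated data, as Carignan suggests, is to check any model-suggested login URL against a curated allowlist of verified official domains before showing it to a user. The sketch below is purely illustrative; the brand names and domains are hypothetical, not drawn from the research.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of verified official domains for each brand.
# In practice this would be maintained from authoritative sources,
# not hard-coded.
OFFICIAL_DOMAINS = {
    "examplebank": {"examplebank.com", "login.examplebank.com"},
}

def is_verified_login_url(brand: str, suggested_url: str) -> bool:
    """Return True only if the URL's hostname is on the brand's allowlist.

    Any hallucinated or lookalike domain fails the check, so the
    assistant can refuse to surface it instead of inventing a link.
    """
    host = urlparse(suggested_url).hostname or ""
    return host in OFFICIAL_DOMAINS.get(brand, set())
```

For example, `is_verified_login_url("examplebank", "https://login.examplebank.com")` passes, while a hallucinated variant such as `https://examplebank-login.com` is rejected.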
The researchers noted that preemptively registering all hallucinated domains, which might seem a viable defense, will not work: the variations are effectively infinite, and LLMs keep inventing new ones, fueling so-called slopsquatting attacks.
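The slopsquatting window the researchers describe is simply a hallucinated domain that no one has registered yet. A defender (or a cautious user) can at least detect that window by checking whether a suggested URL's hostname currently resolves in DNS. A minimal sketch, assuming only the Python standard library:

```python
import socket
from urllib.parse import urlparse

def domain_resolves(url: str) -> bool:
    """Return True if the URL's hostname currently resolves in DNS.

    A hallucinated domain that does NOT resolve is exactly the kind
    of name an attacker could still register and weaponize later.
    Note: resolving is necessary but not sufficient for legitimacy;
    an attacker may already have registered the domain.
    """
    host = urlparse(url).hostname
    if not host:
        return False  # malformed input, no hostname to check
    try:
        socket.getaddrinfo(host, None)
        return True
    except socket.gaierror:
        return False
```

This only flags the unregistered case; a resolving domain still needs the allowlist-style verification discussed above, since attackers who have already registered a hallucinated name will pass a bare DNS check.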

