Researchers at NOMA Safety wrote, “This newly identified vulnerability exploited unheard users, who adopt an agent with a pre-configuored malicious proxy server, uploaded on the ‘Prompt Hub’ (which is against Langchen TOS).” “Once adopted, the malicious proxy stopped all the user communications prudent – including sensitive data, user signs, documents, pictures, and voice inputs such as the API keys (including openiAI API Keys) – without the knowledge of the victim.”
The Langchen team has since warned for agents that have custom proxy configurations, but this vulnerability highlights how serious security consequences may be in well -in -in -in -in -in -in -in -of -law if users do not pay attention, especially on platforms where they copy and drive other people’s codes on their systems.
The problem as mentioning the Fox of Sontype is that, with AI, the risk spreads beyond the traditional executable code. Developers can understand more easily why running software components from repository such as Pypi, NPM, NUGET, and Maven Central takes significant risk on their machines if those components are not weightted first by their security teams. But they cannot think that the same risk applies when testing the system prompt in LLM or testing a custom machine learning (ML) model shared by others.