
Key takeaways of zdnet
- Perplexity comet browser can highlight your personal data.
- An attacker can add command to the prompt through a malicious website.
- AI should treat user data and website data separately.
Get ZDNET AI coverage more deeply: Add us as a favorite google source Chrome and chromium on the browser.
Agent AI Browser AI has a warm new trend in the world. Instead of browsing the web yourself to complete specific tasks, you ask the browser to send your agent to complete your mission. But depending on which browser you use, you can open yourself to the safety risks.
One in Blog post published on WednesdayThe people behind the Brave browser (who offers their own AI-managed assistant dubbed Leo) indicated their collective fingers in the new Comat browser of Perplexity. currently Available for public downloadThe comate agent is made on the basis of AI, promising that your wish is its command.
Also: Why is Google going after Chrome – and yes, it is serious
Do you need to supply a new supply of your favorite protein drinks in Amazon? Instead of doing this yourself, just ask the comate to do for yourself.
Okay, so what is beef? First, there is definitely an opportunity for mistakes. With AI being so prone to errors, the agent may misunderstand your instructions, the way the wrong steps can be taken, or demonstrate the tasks you specified. If you hand over personal details, such as your password or payment to AI to handle the information, the challenges say this.
But the biggest risk lies on how the browser processes the contents of the prompt, and it is where the brave makes a mistake with the comet. In his own performance, Bahadur showed how the attackers can inject the command in order through malicious websites of their own construction. By failing to distinguish between your own requests and the attacker, the browser can highlight your personal data to compromise.
Also: How to get rid of AI overview in Google Search: 4 Easy Ways
“The vulnerability we are discussing This post How the Comet processes the web page content, “Brave said.” When the user calls it ‘a portion of the comate web page directly to present this web page,’ feeds its LLM directly without distinguishing between the user’s instructions and incredible content from the web page. This will allow the attackers to embed the indirect quick injection payload as the AI command. For example, an attacker can get access to the user’s email from the finished piece of text in another tab. ,
To date, there is no known example of such attacks in the wild.
Brave stated that the attack displayed in the comet shows that traditional web security is not sufficient to protect people when using agents AI. Instead, such agents require new types of safety and privacy. Keeping that goal in mind, Bahadur recommended that many measures be implemented.
The browser should differentiate between user instructions and website contentThe browser should separate the requests presented by a user on the prompt from the material given on a website. A malicious site always with a possibility, this material should always be considered incredible.
The AI model must ensure that the work user align with requestsAn inquiry should be investigated against those presented by the user to ensure alignment of any task presented to the prompt.
Also: Scammers infiltrated Google’s AI reactions – how to present them
Sensitive safety and privacy works must require user permissionAI should always need a response from the user before running any task affecting security or privacy. For example, if the agent is asked to send emails, meet shopping or log on to a site, it should first ask the user to confirm.
The browser should separate the agent browsing from regular browsingAgentic browsing mode bears some risks, as the browser can read and send emails or see sensitive and confidential data on a website. For that reason, the agent browsing mode should be a clear option, not something that the user can accidentally access or without knowledge.
How Perplexity has responded to finding defects with comets? Here, I am going to share the timeline of the events described by brave only.
- July 25, 2025: Demonstration discovered and reported depression.
- July 27, 2025: Perplexity accepted vulnerability and implemented an initial fix.
- July 28, 2025: Ritsting revealed that the fix was incomplete; Additional details and comments were provided to perplexity.
- August 11, 2025: A week public disclosure notice sent to Perplexity.
- August 13, 2025: The final test confirmed that the vulnerability appears to be patching.
- August 20, 2025: Public disclosure of vulnerability details (Updated: On further tests after the release of this blog post, we learned that Perplexity has still not completely reduced such an attack. We have reported them again.)
Now, the ball is back in the court of Perplexity. I contacted the company for comment and will update the story with any response.
Also: Best Secure Browser for Privacy: Expert Tested
Brave said, “This vulnerability agent in the perplexity comate highlights a fundamental challenge with AI browsers: ensuring that the agent only takes the tasks that the users want is what is, it is aligned with it,” Brave said. “As AI accessories attain more powerful abilities, indirect early injection attacks pose serious risk to web safety.”