Perplexity Comet Browser Prompt Injection as a major security risk

Copilot[German]Another failure in the field of AI agents. The developers of the Brave browser have revealed a vulnerability in the Perplexity Comet browser. Using prompt injection, it would be possible to retrieve sensitive information from the agent and instruct the agent to place fake orders on behalf of the user. This introduces a new phalanx of security risks, and AI providers have zero protective measures in place against it.


AI agents as the "next big thing"?

AI agents are being hyped everywhere as the "philosopher's stone." They handle your emails, book flights, hotels, and restaurant reservations, and even get birthday presents for your wife if you don't have time. But just like in real life, where dishonest domestic workers work for their own benefit and cheat their employers, AI is not without its flaws. Sometimes it's the chattiness of AI agents that can be exploited by attackers. Sometimes it's weaknesses in the agents that can be abused by attackers.

Aggressive provider Perplexity

The AI provider Perplexity stands out for its aggressive crawling of websites. I hadn't mentioned it here in the blog, but Perplexity uses every dirty trick in the book to circumvent blockages of its crawler by website directives. CloudFlare described this in this blog post.

Perplexity also has the AI browser Comet, which allows users to automate tasks, ask questions, search the internet, and much more. It promises to save users time. The have published this article about the Chromium based Browser, that use the Perpexity search engine. And then there was the marketing stunt of buying the Google Chrome browser (see this article).

But Perplexity can be exchanged for other providers such as OpenAI, Google, etc., and their AI bots or agents. The same risks potentially lurk there.

AI agent orders fake items via Comet browser and more

AI agents that can surf the internet and perform tasks on behalf of a user have incredible potential, but they also pose new security risks. The developers of the Brave browser recently discovered a worrying vulnerability in Perplexity's Comet browser. This vulnerability puts user accounts and other sensitive data at risk.

I came across the following tweet and a link on Techmeme to the article Agentic Browser Security: Indirect Prompt Injection in Perplexity Comet with the details, which the Brave browser developers had already published on August 20, 2025.

Perplexity Comet Browser put user at risk

In short: The Comet browser can summarize websites for users (Perplexity.ai had previously hallucinated in its search engine that I had died and was now blogging from the afterlife, but it has recovered and now claims the opposite).

However, there is a serious fundamental problem with the Comet browser and its agents: when processing the content of a website, Comet cannot distinguish between the content of the website and legitimate user instructions.

This means that the browser can execute commands that have been hidden on the website by an attacker. These malicious instructions can be white text on a white background or HTML comments. Or it can be a social media post.

The problem has been known for some time, for example, that documents from scientific papers contain invisible AI prompts that instruct AI models and bots to push them and review them positively (ee here). But it gets even more malicious: when the AI part of the Comet browser sees the commands during summarization, it executes them, even if they could harm the user. This is called indirect prompt injection.

Example of an indirect prompt injection attack

Brave security experts have outlined a realistic scenario for an indirect prompt injection attack.

  • A Comet user views a Reddit thread containing hidden instructions in the comments.
  • When the user asks the Comet browser for a summary of the thread, the AI agent in the browser follows the malicious instructions, for example, to find the user's Perplexity login credentials and send them to the attacker.

This attack highlights the risks associated with AI agents that operate with full user authentication across multiple websites. This plot can, of course, be expanded or varied.

When the AI agent suddenly places an order

If we think the plot through further, AI agents are supposed to perform tasks independently for their users, such as booking a trip. Can AI agents also be tricked into randomly placing orders when visiting a dangerous website?

Our colleagues at Bleeping Computer addressed precisely this scenario in this article and discussed a study by Gardio. In the study We Put Agentic AI Browsers to the Test – They Clicked, They Paid, They Failed, security researchers at Gardio Labs examined AI agents (including the Comet browser) and investigated their potential for abuse.

The alarming result: The AI agents could be tricked into placing orders (e.g., ordering an Apple Watch on a fake website) or performing other actions on behalf of the user via hidden instructions on fake websites.

Lack of security features opens Pandora's box

This is possible because LLMs and AI agents are being unleashed on people (I'll refrain from using the term "useful idiots" here) without any real security features. When I see how the industry, led by Microsoft, is forcing these features on users, it doesn't take much imagination to picture the consequences.

Brave says that new security measures are needed to make agent-based browsing safe. I think this is just a band-aid solution: in the hare and tortoise race, a new track is being opened where the user will definitely get lost. The AI folks will push their supposed killer AI bots and agents onto the masses. The security industry will sell new snake oil like hotcakes. And cybercriminals will rub their hands with glee. As is so often the case in this trade, the user will be left behind, but will have lost their money on AI products, snake oil, and criminal activities – simply a win-win situation, except for one.

Guardio emphasized to Bleeping Computer that their tests only scratch the surface of the security issues created by the emergence of agent-based AI browsers. Security researchers believe that new threats are likely to replace the usual human-centric attack models.

Bleeping Computer quotes Guardio as saying, "In the age of AI versus AI, fraudsters no longer need to deceive millions of people, they just need to crack one AI model. Once they do, the same exploit can be scaled endlessly. And since they have access to the same models, they can 'train' their malicious AI against the victim's AI until the scam works flawlessly. "

This entry was posted in browser, Cloud, Security and tagged , , , , . Bookmark the permalink.

Leave a Reply

Your email address will not be published. Required fields are marked *

Note: Please note the rules for commenting on the blog (first comments and linked posts end up in moderation, I release them every few hours, I rigorously delete SEO posts/SPAM).