英语轻松读发新版了,欢迎下载、更新

Perplexity's Comet AI browser could expose your data to attackers - here's how

2025-08-22 01:00:00 英文原文

作者:Written by

Perplexity's Comet AI browser could expose your data to attackers - here's how
Perplexity / Lance Whitney / Elyse Betters Picaro / ZDNET

ZDNET's key takeaways

  • Perplexity's Comet browser could expose your private data.
  • An attacker could add commands to the prompt via a malicious site.
  • The AI should treat user data and website data separately.

Get more in-depth ZDNET AI coverage: Add us as a preferred Google source on Chrome and Chromium browsers.


Agentic AI browsers are a hot new trend in the world of AI. Instead of you having to browse the web yourself to complete specific tasks, you tell the browser to send its agent to carry out your mission. But depending on which browser you use, you may be opening yourself up to security risks.

In a blog post published Wednesday, the folks behind the Brave browser (which offers its own AI-powered assistant dubbed Leo) pointed their collective fingers at Perplexity's new Comet browser. Currently available for public download, Comet is built on the premise of agentic AI, promising that your wish is its command.

Also: Why Perplexity is going after Google Chrome -- and yes, it's serious

Do you need to pick up a new supply of your favorite protein drink at Amazon? Instead of doing it yourself, just tell Comet to do it for you.

OK, so what's the beef? 

First, there's certainly an opportunity for mistakes. With AI being so prone to errors, the agent could misinterpret your instructions, take the wrong step along the way, or perform actions you didn't specify. The challenges multiply if you entrust the AI to handle personal details, such as your password or payment information.

But the biggest risk lies in how the browser processes the prompt's contents, and this is where Brave finds fault with Comet. In its own demonstration, Brave showed how attackers could inject commands into the prompt through malicious websites of their own creation. By failing to distinguish between your own request and the commands from the attacker, the browser could expose your personal data to compromise.

Also: How to get rid of AI Overviews in Google Search: 4 easy ways

"The vulnerability we're discussing in this post lies in how Comet processes web page content," Brave said. "When users ask it to 'Summarize this web page,' Comet feeds a part of the web page directly to its LLM without distinguishing between the user's instructions and untrusted content from the web page. This allows attackers to embed indirect prompt injection payloads that the AI will execute as commands. For instance, an attacker could gain access to a user's emails from a prepared piece of text in a page in another tab."

To date, there are no known examples of such attacks in the wild.

However, concerns over agentic AI browsers were also raised in a Wednesday report from online security firm Guardio. Here, researchers from Guardio set up a web page with a fake CAPTCHA prompt. Normally, such a prompt should require human intervention to solve. That's the whole point of CAPTCHA, to prove that you're a human being.

But in this case, the researchers injected commands into the prompt telling Comet's AI agent that this was a special "AI-friendly" captcha that it could solve on behalf of the human user. As designed by Guardio, the web page itself was harmless. But in the real world, it could have contained malware designed to compromise personal data.

"The test scenario we created is quite simple," Guardio said in its report. "A scammer sends a fake message to a victim, posing as their doctor's office, with a link to 'recent blood test results.' The victim asks their AI Assistant to handle it. The AI browses to the link, encounters a captcha, and then uncovers the hidden gem -- causing a drive-by-download attack. The same technique could allow the AI to send emails containing personal details, grant file-sharing permissions on the victim's cloud storage, or execute any other action its permissions allow. In effect, the attacker is now in control of your AI, and by extension, of you."

Brave's recommendations

Brave said the attack demonstrated in Comet shows that traditional web security isn't enough to protect people when using agentic AI. Instead, such agents need new types of security and privacy. With that goal in mind, Brave recommended that several measures be implemented.

The browser should distinguish between user instructions and website content. The browser should separate the requests submitted by a user at the prompt from the content delivered at a website. With a malicious site always a possibility, this content should always be treated as untrusted.

The AI model should ensure that tasks align with the user's request. Any actions submitted to the prompt should be checked against those submitted by the user to ensure alignment.

Also: Scammers have infiltrated Google's AI responses -- how to spot them

Sensitive security and privacy tasks should require user permission. The AI should always require a response from the user before running any tasks that affect security or privacy. For example, if the agent is told to send an email, complete a purchase, or log in to a site, it should first ask the user for confirmation.

The browser should isolate agentic browsing from regular browsing. Agentic browsing mode carries some risks, as the browser can read and send emails or view sensitive and confidential data on a website. For that reason, agentic browsing mode should be a clear choice, not something the user can access accidentally or without knowledge.

"As highlighted in the Brave report, mixing code (user instructions) and data (web page content) is a common pitfall that has caused many security vulnerabilities over the years, from buffer overflows to SQL injections," Lionel Litty, chief security architect at browser security provider Menlo Security, told ZDNET. "We are seeing that in the case of LLMs, the nature of how they work means that creating a separation between code and data is a uniquely challenging task."

How has Perplexity responded? 

Here, I'm just going to share the timeline of events as described by Brave.

  • July 25, 2025: Vulnerability discovered and reported to Perplexity.
  • July 27, 2025: Perplexity acknowledged the vulnerability and implemented an initial fix.
  • July 28, 2025: Retesting revealed the fix was incomplete; additional details and comments were provided to Perplexity.
  • August 11, 2025: One-week public disclosure notice sent to Perplexity.
  • August 13, 2025: Final testing confirmed the vulnerability appears to be patched.
  • August 20, 2025: Public disclosure of vulnerability details (Update: on further testing after this blog post was released, we learned that Perplexity still hasn't fully mitigated the kind of attack described here. We've re-reported this to them.)

Now, the ball is back in Perplexity's court. I contacted the company for comment and will update the story with any response.

Also: The best secure browsers for privacy: Expert tested

"This vulnerability in Perplexity Comet highlights a fundamental challenge with agentic AI browsers: ensuring that the agent only takes actions that are aligned with what the user wants," Brave said. "As AI assistants gain more powerful capabilities, indirect prompt injection attacks pose serious risks to web security."

关于《Perplexity's Comet AI browser could expose your data to attackers - here's how》的评论


暂无评论

发表评论

摘要

ZDNET reports on concerns over Perplexity's new Comet browser, which uses agentic AI to perform tasks for users. The article highlights potential security risks, including the possibility of attackers injecting commands through malicious websites, leading to exposure of personal data. Brave Browser demonstrated how an attacker could exploit Comet by embedding indirect prompt injection payloads that compromise user information. Recommendations from Brave include distinguishing between user instructions and website content, requiring explicit user permission for sensitive tasks, and isolating agentic browsing mode from regular browsing. Perplexity has acknowledged the vulnerability but the issue remains unresolved as of August 2025.