Are AI Browsers Opening New Doors to Cyber Threats?

Are AI Browsers Opening New Doors to Cyber Threats?

I’m thrilled to sit down with Rupert Marais, our in-house security specialist with extensive expertise in endpoint and device security, cybersecurity strategies, and network management. Today, we’re diving into the emerging world of AI-driven browsers, exploring the innovative features they bring to the table and the unique security risks they pose. Our conversation touches on the architectural vulnerabilities in these cutting-edge tools, the specific threats like malicious workflows and prompt injections, and the challenges of safeguarding users in this rapidly evolving landscape.

What sparked the interest at SquareX Labs to investigate the security of AI browsers like Perplexity’s Comet?

Well, Russell, the rise of AI browsers caught our attention because they represent a fundamental shift in how we interact with the web. At SquareX Labs, we noticed a growing trend of browsers integrating AI to automate tasks, and with that came whispers of potential vulnerabilities. There weren’t specific incidents at first, but the sheer autonomy of these AI agents—making decisions on behalf of users—raised red flags for us. We knew we had to dig deeper to understand if these tools were as secure as they needed to be, especially since they handle sensitive data like emails or cloud storage.

How do AI browsers stand apart from traditional ones like Chrome or Firefox in terms of functionality and design?

That’s a great question. AI browsers are built to go beyond just displaying web pages—they’re more like personal assistants. They use natural-language prompts to let users search or perform tasks, like summarizing content or even booking a flight, with minimal clicks. This automation is powered by AI agents that interpret and act on user intent. Unlike traditional browsers, where every action is user-driven, AI browsers introduce a layer of autonomy, which is both their strength and, frankly, their Achilles’ heel when it comes to security.

One of the issues your report highlights is malicious workflows. Can you explain what that means and how it poses a threat?

Absolutely. Malicious workflows refer to scenarios where AI agents in browsers are tricked into performing harmful actions. Think of phishing attacks or OAuth-based scams where an attacker crafts a deceptive prompt or request that looks legitimate. The AI, not being able to fully discern intent like a human might, could grant excessive permissions, exposing things like email accounts or cloud data. It’s a bit like handing over the keys to your house because someone showed up with a convincing fake ID—the AI doesn’t always know better.

Another concern you’ve raised is prompt injection. Can you break down what that is for someone who might not be familiar with the term?

Sure, I’ll keep it simple. Prompt injection is when attackers sneak harmful instructions into places the AI browser trusts, like a document on SharePoint or OneDrive. The AI reads these instructions as if they’re legitimate user commands and might end up sharing sensitive data or embedding malicious links. It’s like slipping a fake note into a trusted friend’s mailbox—the AI doesn’t suspect a thing, but the consequences can be disastrous, from data leaks to spreading malware.

Your research also mentions malicious downloads as a risk with AI browsers. How does this happen, and what makes it so sneaky?

Malicious downloads are a big concern because AI browsers can be manipulated through search results or prompts to download files that look harmless but aren’t. Attackers might rig search outcomes to prioritize a disguised malware file, and since the AI is focused on efficiency, it might pull the file without a second thought. What’s sneaky is that these downloads can bypass a user’s scrutiny—unlike in traditional browsers where you decide to click, the AI might just act, leaving users unaware until it’s too late.

Trusted app misuse was another key issue in your findings. Can you share an example of how a legitimate business tool could be turned against users in this context?

Of course. Imagine a widely used business app, something like a project management tool that integrates with your browser. An attacker could exploit the trust between the AI browser and this app to send unauthorized commands—say, transferring files or accessing restricted data. Because the app is legitimate and already trusted, the AI doesn’t flag the interaction as suspicious. It’s a stark reminder that even the tools we rely on daily can become backdoors if not properly secured against AI-driven exploits.

You’ve noted that current security tools like SASE and EDR struggle to monitor AI browser behavior. What’s behind this limitation?

The core issue is visibility. Tools like SASE and EDR are designed to track human actions or traditional malware patterns, but AI browsers operate on a different level with automated agents. Distinguishing between a user clicking a link and an AI agent doing the same on a prompt is incredibly tough. These tools often lack the context to understand AI-driven actions, so malicious behavior can slip through undetected. It’s like trying to spot a robot in a crowd of people—without the right lens, they all look the same.

Looking ahead, what’s your forecast for the future of AI browser security as these tools become more mainstream?

I think we’re at a pivotal moment, Russell. As AI browsers become more common, the race is on to build security directly into their architecture—things like agentic identity systems to tell human from AI actions, or robust data loss prevention right in the browser. I foresee a lot of collaboration between developers, security vendors, and enterprises to close these gaps. But if we don’t act fast, the risks could scale just as quickly as the adoption. My hope is that within a few years, security will be as native to AI browsers as AI itself, but it’s going to take concerted effort to get there.

Subscribe to our weekly news digest.

Join now and become a part of our fast-growing community.

Invalid Email Address
Thanks for Subscribing!
We'll be sending you our best soon!
Something went wrong, please try again later