AI browsers can be tricked with simple # symbol hack

AI browsers can be tricked with simple # symbol hack - Professional coverage

According to TheRegister.com, Cato Networks researchers discovered a new attack called “HashJack” that hides malicious prompts after the “#” symbol in legitimate URLs, tricking AI browser assistants into executing them while evading traditional network and server-side defenses. The technique works by appending malicious instructions after the “#” in normal URLs, which doesn’t change the destination but feeds hidden commands to AI assistants like Copilot in Edge, Gemini in Chrome, and Comet from Perplexity AI. Google and Microsoft were alerted to HashJack in August, while Perplexity was notified in July, with Google classifying it as “won’t fix” and low severity while the other companies applied fixes. The attack can trigger outcomes like data exfiltration, phishing, misinformation, and even medical harm by providing incorrect dosage guidance. Cato describes this as the first known indirect prompt injection that can weaponize any legitimate website to manipulate AI browser assistants.

Special Offer Banner

How this sneaky attack bypasses security

Here’s the thing about URL fragments – everything after the “#” symbol never actually gets sent to the web server. It stays in your browser. That’s why traditional security tools can’t see these malicious prompts. They’re completely invisible to network monitoring and server-side filtering. Basically, attackers are exploiting a fundamental feature of how URLs work that’s been around for decades, but now AI assistants are actually reading and acting on that fragment content.

And that’s what makes this so clever. You visit what looks like a completely legitimate website – maybe your bank’s login page or a trusted medical site. But the AI assistant running in your browser is reading hidden instructions from the URL fragment that tell it to do something malicious. The user sees a trusted site, trusts their AI browser, and in turn trusts whatever the assistant outputs. It’s a perfect storm of misplaced trust.

Why this should worry everyone

We’re talking about a fundamental shift in how attacks work. Traditional phishing relies on getting users to click suspicious links or visit fake websites. But HashJack turns legitimate, trusted websites into attack vectors. Think about that for a second – your company’s own intranet, your healthcare portal, even government websites could potentially be weaponized against you through this technique.

The researchers found that more capable AI browsers could be commanded to send user data to attacker-controlled endpoints, while simpler assistants might just display misleading instructions or malicious links. Either way, it’s bad news. And the fact that Google considered this “intended behavior” rather than a bug that needs fixing tells you something about how unprepared we are for this new class of threats.

The security implications are massive

So what does this mean for organizations? Security teams can no longer rely on their existing playbooks. Network logs and server-side URL filtering won’t catch this. Companies need to think about layered defenses that include AI governance, blocking suspicious fragments, and monitoring what happens on the client side. It’s a whole new dimension of security to worry about.

Look, AI browsers are just starting to go mainstream. We’re at the very beginning of this shift, and already we’re seeing attacks that fundamentally change the threat landscape. Threats that used to be confined to server vulnerabilities and phishing websites can now live inside the browsing experience itself. That’s a scary thought when you consider how much we’re starting to rely on these AI assistants.

Where we go from here

Microsoft’s statement about “defending against indirect prompt injection attacks” being an “ongoing commitment” sounds nice, but the reality is we’re playing catch-up. The cat-and-mouse game between security researchers and attackers has entered a new phase with AI browsers. And honestly, I’m not convinced the browser makers are taking this seriously enough yet.

This is exactly the kind of vulnerability that could derail enterprise adoption of AI browsers before they even get started. What company wants to risk their sensitive data because their AI assistant got tricked by a cleverly crafted URL? The timing couldn’t be worse – or better, depending on whether you’re defending systems or attacking them.

Leave a Reply

Your email address will not be published. Required fields are marked *