Agentic AI browsing – key privacy and security risks and how to mitigate them?
10th November 2025Agentic browsing turns “users clicking bad links” into “autonomous agents executing malicious instructions at machine speed”. Maybe we can have these browsers watch one of those corporate security training videos that talk about phishing links once a year?
WHAT IS AGENTIC AI?
Agentic AI is defined by its ability to act autonomously with initiative, reasoning, and adaptability, it plans, makes decisions, and takes multi-step actions with minimal human intervention. Unlike traditional AI that only responds to commands or follows fixed rules, agentic AI can set goals, adapt strategies based on context, and execute tasks independently across different systems or applications.
WHAT IS AN AGENTIC AI BROWSER?
Any web browser that that incorporates autonomous agents capable of navigating multiple sites, accessing session cookies, and executing complex workflows could be classified as an agentic AI browser.
Agentic AI browsers act like smart assistants embedded inside your Internet browser. Instead of waiting, they actively analyse web pages, interpret content, and perform actions, saving you time and effort. An everyday example would be when booking a dinner reservation. Instead of manually searching and clicking through multiple restaurant sites, the augmented AI browser analyses options for you, checks availability, compares times, fills out forms, and even completes the booking – all based on your single instruction like “Book me a table for Friday night.”
WHAT ARE THE KEY SECURITY CONCERNS WITH AGENTIC AI BROWSERS?
Several credible researchers and vendors have recently documented serious vulnerabilities and privacy concerns with agentic (AI-driven) browsers, often including platforms like Perplexity Comet, Fellou, and similar “autonomous” web assistants that companies are starting to use more frequently.
The key risks fall into a handful of categories, each documented by published research or vendor advisories referenced below.
Prompt Injection and Manipulation – Agentic browsers can be tricked by malicious hidden text or webpage instructions, known as indirect prompt injection. Attackers can embed instructions inside web content or even images that cause the AI browser to execute unauthorised actions, like accessing a user’s email or bank account. An example is given by Brave, who discovered that Perplexity’s Comet could read and act on barely visible hidden text in screenshots, executing hidden commands as if they were user requests. Proof-of-concept (“CometJacking”) attacks by LayerX showed that attackers could command Comet to copy emails, steal calendar data, or send messages from connected accounts.
Cross-Domain Access and Privilege Escalation – AI agents operate with user-level authentication, meaning if hijacked, they can act inside logged-in sessions (e.g., banking, email, or corporate systems). Once poisoned, the AI can perform cross-origin actions that violate traditional web security controls such as same-origin policy (which is a security feature in web browsers to prevent a website from one origin, interacting with or reading data from another origin). This effectively turns the browser into an insider threat, capable of executing commands with the user’s privileges without consent.
Memory and Data Leakage – These AI systems have persistent long-term memory stores, where they retain information across sessions to improve performance and context awareness. Because agentic browsers often retain context or memory between sessions, they can leak sensitive data such as credentials, PII, and internal documents when prompted by malicious commands hidden in webpages. Persistent memory makes these breaches long-lasting, as poisoned information can be reused in future workflows. Agentic AI browsers' memory capabilities can also increase the risk of reidentifying de-identified data. If attackers manage to poison or manipulate this memory – via memory injection or data poisoning – they can influence the AI to recall or infer sensitive information later, potentially reidentifying data that was originally anonymized or de-identified. Agentic AI's ability to accumulate and connect behavioural or contextual data from multiple interactions enhances its effectiveness but also expands the attack surface for privacy breaches. This is a growing concern in compliance and data privacy because current redaction or anonymization methods may not be robust enough against these advanced AI-based reidentification techniques. Mitigating this requires treating memory as untrusted input, monitoring memory recalls carefully, and applying layered security and privacy guardrails to prevent persistent memory manipulation or unauthorised long-term data retention.
Plugin and Supply Chain Exploitation – Many agentic browsers call external APIs or plugins to complete tasks. These integrations create new supply chain attack surfaces. Compromised third-party plugins can gain access to sensitive agent memory or execute privileged actions on enterprise systems.
Privacy and Data Governance Risks – A large academic study (UCL, UC Davis, Mediterranea University) found that AI browser assistants collect and transmit full webpage content, including form data such as medical or financial information, to remote servers without adequate transparency or opt-outs. Some extensions even shared IP addresses and personal identifiers with analytics networks for ad profiling.
Agentic AI browsers can sometimes read redacted information because redaction on digital screens is often visual-only and not truly removing or encrypting the underlying data. Techniques like screenshot capture combined with AI-powered OCR can extract text, even if it is visually obscured or redacted. Malicious cloaked content can trick the AI agent into interpreting or revealing what appears redacted to humans but remains present in the page's source or memory.
Secure redaction techniques to prevent this include:
- True Data Redaction or Masking
- Removing or encrypting sensitive data at the source (e.g., in databases or files) so redacted information cannot be reconstructed or read
- Strip hidden or invisible text (white-on-white, HTML comments) before rendering or feeding to AI
- Require explicit manual user interaction or approval before AI processes sensitive content
- Continuously test AI agents against known redaction bypass techniques and sanitise prompts to remove malicious instructions
Effective redaction against agentic AI browsers demands true removal or encryption rather than simple visual obfuscation, alongside strict security controls and AI behaviour guardrails to prevent unintended data exposure.
Lack of Forensic Oversight and Logging – Traditional SOC tools and audit mechanisms cannot easily detect AI browser actions because they occur outside normal event logging pipelines. Agent behaviour (tab switching, form submission, file access) may leave no explicit logs, creating a visibility gap for defenders.
SECURITY CONSIDERATIONS: SEGREGATION, VALIDATION, OVERSIGHT AND GOVERNANCE
Isolation of Agentic and Standard Browsing Sessions – Agentic browsing should operate in distinct sandboxed environments separate from regular user browsing. Prevent shared cookies, sessions, and authentication tokens to block cross-domain data leakage. Require explicit user consent before granting an AI browser access to stored credentials, enterprise logins, or local files.
Apply Zero Trust principles so each AI session is treated as untrusted by default, with identity revalidation and minimal permissions.
Validation and Filtering of Model Inputs/Outputs – Prompt injection and “UI misinterpretation” are the leading threats. Implement two-tier filtering:
- Input filtering: Strip, sanitize, or hash untrusted webpage content before it is incorporated into an AI model’s prompt.
- Output validation: Compare the AI’s proposed actions with the genuine user request; reject any command that performs additional or unrelated operations (e.g. hidden transfers or data exposure).
Use guardrail models or secondary policies to enforce alignment validation at runtime.
Context and Memory Protection – Memory retention in augmented browsers obviously then expands exposure which is a huge concern. So encryption and sandboxing AI conversational or task memory can provide some control and additional security. Preventing the ability to store this in line with user browsing history provides some additional guard rails. Enforce memory expiry after session termination, preventing reuse of sensitive data across unrelated tasks.
User-in-the-Loop Oversight – Autonomous workflows should in the main require human confirmation for all security-sensitive actions such as logins, purchases, or system changes (and where this is not the case there it is documented and reasoning given including sign off from relevant business unit owners). There needs to require a manual review before the AI clicks on permissions, submits credentials, or executes payments and any other business impactful and critical process. Make autonomous browsing visually distinguishable (e.g., a color-coded AI mode indicator) or other visually, audible aid or automated pop-up warning. This goes some way to reducing the risk of users unintentionally exposing privileged sessions.
Enterprise Policy Enforcement and DLP Integration – Enterprises should treat AI browsers as managed endpoints governed by corporate security tooling. Apply enterprise browser security extensions that enforce Data Loss Prevention (DLP) and block clipboard operations or file transfers containing regulated data or any data that should require oversight and control before any interaction, movement and or change. Make sure any current extension restrictions denying installation also cover the agented AI and enforce policy-based permissions based on identity. Combine browser telemetry is being captured and logged within the SIEM monitoring for cross-session anomaly detection and leverages an already potent oversight tool.
Plugin and Supply Chain Integrity – AI browsers often integrate external APIs and automation tools. Digitally sign all plugin integrations and verify origins during execution. Maintain vetted allowlists for third-party tools; use least-privilege tokens for API calls and regularly review dependency security advisories to counter plugin-level data leaks.
Compliance and Governance Frameworks – Organisations deploying augmented browsers should embed them under AI governance and assurance programs. Perform Data Protection Impact Assessments and AI risk analyses based on their current frameworks and AI policies. Maintain clear accountability for agent actions, including rollback and data erasure protocols. Much like other current key regulatory and legal concerns regarding data exposure (tracking, collection, purging, data lifecycle oversight), there needs to be a concise implemented AI activity attestation and review process to meet regulatory standards (GDPR, PCI, HIPAA).
Authoritative Sources
Seraphic Security – The Rise of Agentic Browsers: A New Frontier in Browser Security (Oct 2025)
SoftwareAnalyst Cyber Research – Agentic Browsers and the New Last Mile in Cybersecurity (Aug 2025)
Brave Security Blog – Indirect Prompt Injection in Perplexity Comet (Aug 2025)
Opera Security Blog – Protected with Opera Neon: Understanding Agentic Browser Security (Oct 2025)
Brave Software – Systemic Security Issues in AI Browsers and Indirect Prompt Injection in Comet
LayerX Security – CometJacking Exploit Report (21 Oct 2025)
Simon Willison (Independent Researcher) – Indirect Prompt Injection Analysis in Perplexity Comet
University College London Study – AI Web Browser Assistants Raise Serious Privacy Concerns (Sep 2025)
Q&A’s
How can script in a website be deployed by AI browser to read email or banking info - is it gaining access to the app or is it telling the browser to read what's on your screen? If it's getting access, how is it doing this? If it's reading from your screen how is the browser sending this info somewhere?
Agentic AI web browsers can enable scripts or malicious prompts in web content to gain access to sensitive data, like emails and banking information with two primary attack routes:
- by leveraging the browser’s privileged access to apps and authenticated sessions. Prompt injection via website scripts tricks the browser’s AI into performing actions or stealing info while logged in, often bypassing standard web protections.
- by extracting and exfiltrating information displayed on the user’s screen using advanced techniques like AI-powered OCR (Optical Character Recognition) and prompt injection. Screen content attacks use screenshot features and AI to read what’s visible or even hidden on your display, sending sensitive data out to attackers.
Both routes exploit the browser’s authority and automation, so attackers don’t need direct app credentials – just a way to guide the AI or grab what’s being shown. It really highlights why agentic browser controls, prompt filtering, and session isolation are critical for security, especially those in industries and environments handling confidential and or large amounts data.
The information obtained by an agentic AI browser is typically sent to the threat actor through covert data exfiltration methods embedded within the browser’s network activity (Background Network Requests, WebSocket Channels, Data Encoding in requests and or the use of third-party services).
How does being able to move from one domain to another result in privilege escalation? Is it hoping that credentials have been reused? Or is it looking at what the same user sees as they move between domains. Is it harvesting creds? Or info? Or both?
With agentic AI browsers, moving from one domain to another can lead to privilege escalation primarily because the AI agent can leverage the user's existing authenticated sessions and reused credentials across sites. It can look at the user's entire browsing context, harvesting both credentials and visible information as it navigates domains (not to dissimilar to the kids in the fridge after a food shop locating and eating with ease all the things hidden that are needed for their lunch boxes to last all week!).
The browser’s AI can pivot autonomously, exploiting credential reuse and session tokens to access multiple domains without repeated authentication. It harvests credentials when available and gathers any accessible data the user can see, effectively combining credential and info theft to escalate privileges across domains. This cross-domain capability allows the agent to perform actions or access data as if it were the user.
How do we know if Ai is agentic? For BPOs I imagine it's by design but if I use Google AI is that agentic? Is all ai accessed via a browser subject to this issue in some form.
Agentic AI is defined by its ability to act autonomously with initiative, reasoning, and adaptability, it plans, makes decisions, and takes multi-step actions with minimal human intervention. Unlike traditional AI that only responds to commands or follows fixed rules, agentic AI can set goals, adapt strategies based on context, and execute tasks independently across different systems or applications. So for BPOs or managed environments, agentic AI is usually by design, deliberately set up to automate workflows and decisions.
In contrast, using Google AI or similar generative AI accessed via a web browser typically is not agentic unless it has autonomous action capabilities beyond just responding to user prompts. Most AI accessed via browsers today are purely reactive (non-agentic) and therefore not subject to the same risks associated with agentic AI like cross-domain automation or privilege escalation as discussed above.
However, any AI embedded in a browser that incorporates autonomous agents capable of navigating multiple sites, accessing session cookies, and executing complex workflows could be classified as agentic and share those security concerns.
So, while not all AI accessed via browsers is agentic, agentic AI by definition combines autonomy and tool use that create broader attack surfaces and risks.
Agentic AI acts autonomously and plans actions, not just responds. BPO agentic AI is by design, for workflow automation. Google AI or typical browser AI is mostly non-agentic (reactive). Only autonomous multi-system AI agents are truly agentic and face those risks outlined above.
In essence, the main considerations are explicit isolation, rigorous user oversight, and continuous telemetry. AI browsers blur the traditional boundary between user and system – addressing that risk requires designing them as audited, semi-autonomous digital agents operating inside Zero Trust boundaries rather than as free-roaming assistants.
If you have a questions, get in touch via the contact form and our TRAM team will get in touch with an answer.
