Supply Chain · January 12, 2026 · 9 min read

Prompt Poaching in the Chrome Web Store and AI Chat Exfiltration

Two fake AI extensions siphoned ChatGPT and DeepSeek conversations on a schedule. Browser add-ons are now AI data pipelines.

Chrome Web Store listing for a Wikipedia browser extension. Image: AHollender (WMF) via Wikimedia Commons, CC BY-SA 4.0.

Executive summary

Malicious browser extensions can turn AI chat into a silent data-loss channel. When an extension can read prompts, responses, and browser context, the issue is not just extension malware. It is the enterprise decision to treat browser AI tooling as low-risk convenience software.

Browser extensions sit inside the browser, see what users see, and often receive broad permissions by default. Attackers abused that trust to read AI conversations and export them. The latest example is a campaign researchers call "prompt poaching": extensions that siphon AI chats to attacker infrastructure.

Malicious Chrome Extensions Turned AI Chats Into a Stealth Exfiltration Channel

OX Security identified two AI-themed Chrome extensions that impersonated a legitimate tool from AITOPIA. The malicious versions looked and behaved like a standard AI sidebar, but added hidden surveillance logic. Their combined install base exceeded 900,000 users.

Affected Extensions

  • Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI (extension ID: fnmihdojmnkclgjpcoonokmkhjpjechg)
  • AI Sidebar with Deepseek, ChatGPT, Claude and more (extension ID: inhcgfpbfdjbjogdfjbclgolkmhnooop)

OX Security Shows the Extensions Reading ChatGPT and DeepSeek Conversations From the Browser

The mechanics are simple, which is exactly why this scales. The extensions requested permission to collect "anonymous, non-identifiable analytics data." In practice, that permission enabled broad access to page content and browsing activity. The malware monitored tabs, detected ChatGPT and DeepSeek pages, and scraped prompts and responses directly from the DOM. Data was cached locally and uploaded in batches approximately every 30 minutes.

Observed Data Collected

  • Full ChatGPT and DeepSeek conversations
  • URLs of open browser tabs
  • Session identifiers and related metadata
  • Internal enterprise URLs and research queries

The Organizational Consequence Is Prompt Theft, Context Theft, and Data Loss at Scale

If you allow extensions, you are already in the blast radius. AI conversations are no longer casual chat. They include source code, legal strategy, incident response notes, and proprietary business context. An extension that can read that content becomes an enterprise-grade data loss channel.

Operational Risk

Sensitive conversations can surface system design details, architectural diagrams, credentials, or internal URLs that accelerate lateral movement.

Compliance Risk

Exfiltrated prompts can contain regulated data (PII, PHI, financial data) that triggers reporting obligations and contractual violations.

OX Security’s Indicators Point to Attacker-Controlled Infrastructure for Browser-Extension Exfiltration

The researchers observed collected data being sent to attacker-controlled endpoints. If you operate enterprise monitoring, add these indicators to your detection and response workflows.

C2 endpoints:
deepaichats.com
chatsaigpt.com
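A minimal way to operationalize these indicators is to sweep existing proxy or DNS logs for the two domains. The sketch below is a starting point, not a finished detection: the log layout and the `host_field` position are assumptions you should adapt to your own schema. Matching includes subdomains, since C2 infrastructure frequently rotates hostnames under a parent domain.

```python
# Sweep proxy or DNS logs for the prompt-poaching C2 domains reported by
# OX Security. The whitespace-delimited log format assumed here is
# illustrative; adjust the field extraction to your own schema.

C2_DOMAINS = {"deepaichats.com", "chatsaigpt.com"}

def matches_ioc(hostname: str) -> bool:
    """True if hostname is a C2 domain or any subdomain of one."""
    host = hostname.strip().lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in C2_DOMAINS)

def sweep(lines, host_field: int = 2):
    """Yield log lines whose host field (by index) hits an indicator."""
    for line in lines:
        fields = line.split()
        if len(fields) > host_field and matches_ioc(fields[host_field]):
            yield line
```

Matching on the parsed hostname rather than substring-searching the raw line avoids false positives from lookalike domains such as `notdeepaichats.com`.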

Operational Next Step: Inventory AI-Adjacent Extensions and Lock Down Install Paths

The window to respond is short once chats start leaving the browser.

Priority Checklist

1. Inventory extensions: Use MDM or browser management to enumerate installed extensions and remove the identified IDs.
2. Review AI tool usage: Identify teams using ChatGPT or DeepSeek and assess what data was discussed in-browser.
3. Harden browser policies: Restrict extension installation to an approved list, and enforce least-privilege permissions.
4. Monitor outbound traffic: Add indicators to SIEM and alert on data egress to newly registered or low-reputation domains.
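For step 1, even without MDM you can script a first-pass inventory against Chrome's on-disk layout, where each installed extension lives in a directory named by its 32-character ID. The sketch below checks a profile's Extensions directory against the two IDs from the OX Security report; the profile path varies by OS and browser channel.

```python
from pathlib import Path

# Extension IDs identified by OX Security in the prompt-poaching campaign.
MALICIOUS_IDS = {
    "fnmihdojmnkclgjpcoonokmkhjpjechg",
    "inhcgfpbfdjbjogdfjbclgolkmhnooop",
}

def find_malicious_extensions(extensions_dir) -> list[str]:
    """Return known-malicious extension IDs installed under a Chrome profile.

    Chrome stores each extension in <profile>/Extensions/<32-char id>/.
    Point this at, e.g.:
      Linux:  ~/.config/google-chrome/Default/Extensions
      macOS:  ~/Library/Application Support/Google/Chrome/Default/Extensions
    """
    root = Path(extensions_dir)
    if not root.is_dir():
        return []
    return sorted(d.name for d in root.iterdir()
                  if d.is_dir() and d.name in MALICIOUS_IDS)
```

Treat a hit as a containment trigger, not just a cleanup task: assume any AI conversation held in that browser profile has left the organization.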

3LS Belongs in the Browser-Extension Control Plane for Runtime Visibility

3LS gives security teams a policy and evidence layer around AI use in the browser: which extensions are approved, where risky permissions appear, and when prompts or conversations are exposed to extension-controlled surfaces that need review.
