Supply Chain · January 12, 2026 · 9 min read

Prompt Poaching in the Chrome Web Store and AI Chat Exfiltration

Two fake AI extensions siphoned ChatGPT and DeepSeek conversations on a schedule. Browser add-ons are now AI data pipelines.


Executive Summary

OX Security reported two malicious Chrome extensions that impersonated a popular AI sidebar tool. The extensions siphoned ChatGPT and DeepSeek conversations and browser tab URLs, then exfiltrated the data in batches. As of December 30, 2025, the extensions were still live in the Chrome Web Store.

Browser extensions sit inside the browser, see what users see, and often receive broad permissions by default. Attackers abused that trust to read AI conversations and export them. Researchers call the campaign "prompt poaching": extensions that siphon AI chats to attacker-controlled infrastructure.

What Happened

OX Security identified two AI-themed Chrome extensions that impersonated a legitimate tool from AITOPIA. The malicious versions looked and behaved like a standard AI sidebar, but added hidden surveillance logic. Their combined install base exceeded 900,000 users.

Affected Extensions

  • Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI (extension ID: fnmihdojmnkclgjpcoonokmkhjpjechg)
  • AI Sidebar with Deepseek, ChatGPT, Claude and more (extension ID: inhcgfpbfdjbjogdfjbclgolkmhnooop)
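A quick first check on an endpoint is to scan the browser profile for the flagged extension IDs. The sketch below assumes a Chromium-style profile layout (one `Extensions/<id>/` folder per installed extension); the profile path varies by OS and browser channel, so adjust it for your environment.

```python
from pathlib import Path

# Extension IDs reported by OX Security.
FLAGGED_IDS = {
    "fnmihdojmnkclgjpcoonokmkhjpjechg",
    "inhcgfpbfdjbjogdfjbclgolkmhnooop",
}

def find_flagged_extensions(profile_dir: str, flagged_ids=FLAGGED_IDS) -> set:
    """Return the subset of flagged extension IDs installed under a
    Chromium profile's Extensions/ directory."""
    ext_root = Path(profile_dir) / "Extensions"
    if not ext_root.is_dir():
        return set()
    installed = {p.name for p in ext_root.iterdir() if p.is_dir()}
    return installed & flagged_ids
```

On macOS, for example, the default Chrome profile typically lives at `~/Library/Application Support/Google/Chrome/Default`; at fleet scale, MDM or browser management gives the same inventory without touching disk directly.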

How the Exfiltration Worked

The mechanics are simple, which is exactly why this scales. The extensions requested permission to collect "anonymous, non-identifiable analytics data." In practice, that permission enabled broad access to page content and browsing activity. The malware monitored tabs, detected ChatGPT and DeepSeek pages, and scraped prompts and responses directly from the DOM. Data was cached locally and uploaded in batches approximately every 30 minutes.
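The roughly 30-minute upload cadence is itself a hunting signal: near-constant inter-request intervals to one destination stand out against normal browsing. A minimal interval-analysis sketch over per-destination request timestamps (the threshold and jitter tolerance are illustrative assumptions, not values from the report):

```python
from statistics import mean, pstdev

def looks_like_beaconing(timestamps, expected_s=1800, jitter_s=120, min_events=4):
    """Flag a destination if outbound requests recur at a near-constant interval.

    timestamps: sorted UNIX epoch seconds of requests to one destination.
    expected_s / jitter_s: illustrative values for a ~30-minute cadence.
    """
    if len(timestamps) < min_events:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    # Mean gap near the expected period, with low spread, suggests a timer.
    return (abs(mean(gaps) - expected_s) <= jitter_s
            and pstdev(gaps) <= jitter_s)

# Uploads every ~30 minutes with small jitter.
periodic = [0, 1805, 3598, 5410, 7205]
```

Real proxy logs are noisier than this, but the same idea (low variance around a fixed period) drives most beaconing detections.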

Observed Data Collected

  • Full ChatGPT and DeepSeek conversations
  • URLs of open browser tabs
  • Session identifiers and related metadata
  • Internal enterprise URLs and research queries

Why This Matters for Enterprises

If you allow extensions, you are already in the blast radius. AI conversations are no longer casual chat. They include source code, legal strategy, incident response notes, and proprietary business context. An extension that can read that content becomes an enterprise-grade data loss channel.

Operational Risk

Sensitive conversations can surface system design details, architectural diagrams, credentials, or internal URLs that accelerate lateral movement.

Compliance Risk

Exfiltrated prompts can contain regulated data (PII, PHI, financial data) that triggers reporting obligations and contractual violations.

Indicators and Infrastructure (From OX Security)

The researchers observed data being sent to attacker-controlled endpoints. If you operate enterprise monitoring, add these indicators to your detection and response workflows.

C2 endpoints:
deepaichats.com
chatsaigpt.com
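When sweeping DNS or proxy logs for these indicators, match subdomains as well as the bare domains, and avoid naive substring checks that also hit unrelated hosts. A small helper:

```python
IOC_DOMAINS = {"deepaichats.com", "chatsaigpt.com"}

def matches_ioc(hostname: str, iocs=IOC_DOMAINS) -> bool:
    """True if hostname equals an IOC domain or is a subdomain of one."""
    host = hostname.lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in iocs)
```

The suffix check with a leading dot is what keeps a host like `notdeepaichats.com` from matching.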

Immediate Actions for Security Teams

The window to respond is short once chats start leaving the browser.

Priority Checklist

1. Inventory extensions: Use MDM or browser management to enumerate installed extensions and remove the identified IDs.
2. Review AI tool usage: Identify teams using ChatGPT or DeepSeek and assess what data was discussed in-browser.
3. Harden browser policies: Restrict extension installation to an approved list, and enforce least-privilege permissions.
4. Monitor outbound traffic: Add indicators to SIEM and alert on data egress to newly registered or low-reputation domains.
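For the policy-hardening step, Chrome's enterprise policies can block specific extension IDs, or deny all extensions and allow only a vetted list. A sketch that builds such a policy fragment; verify the exact key names (`ExtensionInstallBlocklist`, `ExtensionInstallAllowlist`) against Google's enterprise policy documentation for your browser version before deploying:

```python
import json

FLAGGED_IDS = [
    "fnmihdojmnkclgjpcoonokmkhjpjechg",
    "inhcgfpbfdjbjogdfjbclgolkmhnooop",
]

def build_policy(flagged_ids, approved_ids=None):
    """Build a Chrome extension policy fragment.

    With only flagged_ids: block the known-bad IDs.
    With approved_ids: stricter default-deny posture, allowlist only.
    """
    policy = {"ExtensionInstallBlocklist": sorted(flagged_ids)}
    if approved_ids is not None:
        policy["ExtensionInstallBlocklist"] = ["*"]  # deny everything by default
        policy["ExtensionInstallAllowlist"] = sorted(approved_ids)
    return policy

print(json.dumps(build_policy(FLAGGED_IDS), indent=2))
```

The default-deny form is the stronger posture: new malicious extensions never need to be known in advance to be blocked.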

How AARSM Helps

AARSM correlates browser activity with AI tool usage and blocks exfiltration patterns before chats leave the browser.


About This Analysis

This analysis is based on public reporting from OX Security and subsequent coverage in the security press. Dates reflect the public disclosure timeline as of December 30, 2025.

Research contributed by the Three Laws Security Research Team

Contact: research@threelawssecurity.com
