The story is not a rogue user. It is the exception path. Reports say FOUO contracting files were pasted into public ChatGPT under a temporary exemption, triggering alerts and a DHS review.
AI Security Intelligence
Latest research, vulnerability analysis, and threat intelligence from the AI security frontlines. Expert insights for security professionals defending against AI-era attacks.
Featured Articles
Essential reading for AI security professionals
The Enterprise Agent Control Plane from Toggles to Policy as Code
If agent safety lives in user settings, you do not have policy. You have uneven risk decisions across teams.
Approval Fatigue Is an Enterprise Risk in Agent Sandboxes
If approvals and sandboxes live in personal settings, policy becomes a suggestion. Fatigue turns security decisions into muscle memory.
Agent Sandboxes Are the Containment Boundary
Approval prompts slow risk, but they do not contain it. Only OS-level sandboxes limit the blast radius when agents act.
Agentic Browser Prompt Injection and the Lethal Trifecta
When a browser agent can read, decide, and act, every page becomes a potential instruction set. Brave's Perplexity Comet research shows how hidden text triggers cross-site actions and data loss.
MCP Tool Poisoning Turns Descriptions Into Exfiltration Paths
Tool metadata is now prompt content. If it is untrusted, it can override intent and leak data.
Two fake AI extensions siphoned ChatGPT and DeepSeek conversations on a schedule. Browser add-ons are now AI data pipelines.
One copy-paste can clone a repo, read private email, and send it out. This is a real vulnerability, not a demo.
Model diversity sounds good until data moves to a third party. One toggle can change residency, contracts, and regulatory exposure.
An attacker nearly shipped a compromised AWS Toolkit update to 8.2 million developers. Extension stores are now supply chain infrastructure.
Clicking Yes to AI Disaster and the Approval Fatigue Crisis
Approval prompts train muscle memory. Attackers exploit that fatigue to turn helpful agents into a data exfiltration path.
The MCP Server That Wiped Production and the AI Tooling Risk
A routine cleanup request deleted years of data. The issue was over-privileged tools and no guardrails.
The 2024 Prompt Injection Wave and Lessons from CVE-2024-5184
A Gmail integration became an agent takeover path. Prompt injection is now a system vulnerability, not a content issue.
The $4.88M Question on AI PII Exposure
A single paste can become a breach. From Samsung's ChatGPT incident to training data extraction, 26% of organizations are feeding sensitive data to public AI.
Shadow AI and the 485% Surge in Uncontrolled Usage
If half your org will keep using AI even if banned, enforcement is a myth. Governance has to be built into workflows.
All Articles
Comprehensive AI security coverage
If CISA Can Put FOUO in ChatGPT, Your Exception Process Is the Breach
The story is not a rogue user. It is the exception path. Reports say FOUO contracting files were pasted into public ChatGPT under a temporary exemption, triggering alerts and a DHS review.
Read Article
The Enterprise Agent Control Plane from Toggles to Policy as Code
If agent safety lives in user settings, you do not have policy. You have uneven risk decisions across teams.
Read Article
Approval Fatigue Is an Enterprise Risk in Agent Sandboxes
If approvals and sandboxes live in personal settings, policy becomes a suggestion. Fatigue turns security decisions into muscle memory.
Read Article
Agent Sandboxes Are the Containment Boundary
Approval prompts slow risk, but they do not contain it. Only OS-level sandboxes limit the blast radius when agents act.
Read Article
Agentic Browser Prompt Injection and the Lethal Trifecta
When a browser agent can read, decide, and act, every page becomes a potential instruction set. Brave's Perplexity Comet research shows how hidden text triggers cross-site actions and data loss.
Read Article
MCP Tool Poisoning Turns Descriptions Into Exfiltration Paths
Tool metadata is now prompt content. If it is untrusted, it can override intent and leak data.
Read Article
PromptPwnd Shows the Agentic CI/CD Supply Chain Risk
Untrusted repo content can steer AI agents that hold secrets. That collapses the boundary between input and execution.
Read Article
Prompt Poaching in the Chrome Web Store and AI Chat Exfiltration
Two fake AI extensions siphoned ChatGPT and DeepSeek conversations on a schedule. Browser add-ons are now AI data pipelines.
Read Article
Claude.ai Email Exfiltration Shows How Assistants Can Leak Inboxes
One copy-paste can clone a repo, read private email, and send it out. This is a real vulnerability, not a demo.
Read Article
M365 Copilot and Claude Create a Compliance Bomb
Model diversity sounds good until data moves to a third party. One toggle can change residency, contracts, and regulatory exposure.
Read Article
The AWS VSCode Supply Chain Near Miss That Almost Reached Millions
An attacker nearly shipped a compromised AWS Toolkit update to 8.2 million developers. Extension stores are now supply chain infrastructure.
Read Article
Clicking Yes to AI Disaster and the Approval Fatigue Crisis
Approval prompts train muscle memory. Attackers exploit that fatigue to turn helpful agents into a data exfiltration path.
Read Article
The MCP Server That Wiped Production and the AI Tooling Risk
A routine cleanup request deleted years of data. The issue was over-privileged tools and no guardrails.
Read Article
The 2024 Prompt Injection Wave and Lessons from CVE-2024-5184
A Gmail integration became an agent takeover path. Prompt injection is now a system vulnerability, not a content issue.
Read Article
The $4.88M Question on AI PII Exposure
A single paste can become a breach. From Samsung's ChatGPT incident to training data extraction, 26% of organizations are feeding sensitive data to public AI.
Read Article
Shadow AI and the 485% Surge in Uncontrolled Usage
If half your org will keep using AI even if banned, enforcement is a myth. Governance has to be built into workflows.
Read Article
OWASP LLM Top 10 2025 and Why It Matters
A practical walkthrough of the OWASP LLM Top 10 and the control gaps it exposes for enterprise AI deployments.
Read Article
EU AI Act Compliance Timeline for 2025-2027
A phased view of what applies when: February 2025 prohibitions, August 2025 GPAI obligations, and 2026-2027 enforcement milestones.
Read Article
Multimodal Prompt Injection When Images Hide Malicious Commands
Visual prompt injection shows that images can carry hidden instructions. Text-only filters are no longer enough.
Read Article
Weekly AI Threat Intelligence Briefings
Get updates when we publish new research on emerging AI attacks, supply chain threats, and defense strategies.
No spam. Unsubscribe anytime.