Activity is understood
3LS sees what the interaction is for and whether sensitive content is involved.
AI controls
AI controls enforce policy before the boundary, answering the final question in the flow: what should happen now? Before a prompt, upload, or tool action crosses policy, teams can allow, warn, or block by default, or require review when policy calls for escalation.
Controls turn visibility into action so teams can keep low-risk work moving, intervene before sensitive context leaves, and stop interactions that do not fit policy.
Teams can weigh business purpose, sensitivity, and risk before deciding what should happen next.
Teams can allow, warn, or block based on the specific interaction instead of applying the same response to everything.
Operators can see what action was taken and why.
Allow
Routine drafting, coding, and research can continue when the interaction fits expected usage and does not carry sensitive risk.
Warn
When intent or sensitivity raises concern, teams can introduce a warning and guide people toward safer handling.
Block
Prevent sharing or processing that does not fit organizational guardrails, and leave a clear record of the decision.
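The allow/warn/block decision described above can be sketched as a small policy function. This is a minimal illustration, not the 3LS API: the `Interaction` fields, the purpose list, and the sensitivity labels are all hypothetical, chosen only to show how purpose and sensitivity might combine into an action.

```python
# Hypothetical sketch of an allow/warn/block policy decision.
# All names and categories here are illustrative, not the 3LS API.
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    ALLOW = "allow"
    WARN = "warn"
    BLOCK = "block"


@dataclass
class Interaction:
    purpose: str      # e.g. "drafting", "coding", "research", "unknown"
    sensitivity: str  # e.g. "none", "internal", "restricted"


# Assumed set of expected, low-risk business purposes.
EXPECTED_PURPOSES = {"drafting", "coding", "research"}


def decide(interaction: Interaction) -> Action:
    # Block content that does not fit organizational guardrails,
    # regardless of stated purpose.
    if interaction.sensitivity == "restricted":
        return Action.BLOCK
    # Warn when intent is unclear or internal data is involved,
    # guiding people toward safer handling.
    if (interaction.purpose not in EXPECTED_PURPOSES
            or interaction.sensitivity == "internal"):
        return Action.WARN
    # Routine, non-sensitive work keeps moving.
    return Action.ALLOW
```

Each decision could also be logged with the interaction's purpose and sensitivity, so operators can later see what action was taken and why.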
From the blog
Research on the moments where AI control actually matters: enforcement, approval fatigue, containment, and tool-level policy.

Prompt warnings and approval clicks do not contain a tool-enabled agent. Hard execution policy does.

Approval prompts train muscle memory. Attackers exploit that fatigue to turn helpful agents into a data exfiltration path.

Approval prompts slow risk, but they do not contain it. Only OS-level sandboxes limit the blast radius when agents act.