Runtime control layer

See and control AI use before data or authority leaves your organization.

3LS Platform is the runtime control layer for enterprise AI use: policy, control, and observability applied before prompts, uploads, OAuth grants, and delegated tool calls leave your control, whether they originate in Codex, Claude, browsers, or MCP-connected systems.

Runtime decision record

Prompt with customer export

Input: CSV upload to external assistant
Detected: PII, customer records, account notes
Policy: Warn, retain evidence, require acknowledgement
Outcome: User warning issued

Mechanism: On-device agent intercepts AI activity before it leaves the endpoint.
Privacy: Detections and policy decisions stay within your organization.
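The decision record above can be thought of as a small, structured, auditable event. A minimal sketch of what such a record might look like, assuming illustrative field names rather than the actual 3LS Platform schema:

```python
from dataclasses import dataclass, asdict
import json

# Illustrative sketch of a runtime decision record. Field names are
# assumptions for this example, not the actual 3LS Platform schema.
@dataclass
class DecisionRecord:
    input: str          # what left (or tried to leave) the endpoint
    detected: list      # classifications attached at interception time
    policy: str         # the policy that matched
    outcome: str        # the runtime action that was taken

record = DecisionRecord(
    input="CSV upload to external assistant",
    detected=["PII", "customer records", "account notes"],
    policy="Warn, retain evidence, require acknowledgement",
    outcome="User warning issued",
)

# Serialize for retention as evidence within the organization.
print(json.dumps(asdict(record), indent=2))
```

Keeping the record as plain structured data is what makes it explainable and auditable later: every field maps directly to a line an operator can read.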

Runtime control capabilities

A small set of runtime decisions security teams can explain, audit, and apply consistently.

AI visibility

See how people, assistants, and agentic tools are using AI across the organization.

Prompt classification

Understand whether AI is being used for drafting, coding, research, data handling, or higher-risk workflows.

Sensitive data detection

Detect PII, secrets, and restricted content before it becomes an incident or a compliance problem.

Actionable controls

Allow, warn, block, or escalate before sensitive context moves through prompts, uploads, OAuth grants, or delegated tool actions.
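The four runtime actions above can be pictured as a simple policy evaluation over what was detected in the outbound context. A hypothetical sketch, with categories and ordering chosen for illustration rather than taken from 3LS Platform's actual policy engine:

```python
# Hypothetical mapping from detections to one of the four runtime
# actions named above. The detection labels and precedence rules are
# illustrative assumptions, not 3LS Platform's real policy logic.
def evaluate(detections: set[str]) -> str:
    """Pick a runtime action for an outbound prompt, upload, or tool call."""
    if "secrets" in detections:
        return "block"       # credentials never leave
    if "restricted_internal" in detections:
        return "escalate"    # route to a human reviewer
    if "pii" in detections:
        return "warn"        # allow with acknowledgement, retain evidence
    return "allow"           # clean context passes through

print(evaluate({"pii"}))      # warn
print(evaluate({"secrets"}))  # block
print(evaluate(set()))        # allow
```

The point of the sketch is that the decision surface is small and deterministic, which is what lets security teams explain and audit outcomes consistently.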

Detect and manage shadow AI usage

Discover unmanaged AI use across browsers, assistants, coding tools, and agentic workflows before it becomes an exposure problem.

Recognize real tools in use

Identify usage across tools like Codex, Claude, browser-based assistants, MCP-connected workflows, and other agentic tools.

Separate approved from unmanaged use

Understand which tools and workflows align with policy and which ones need review, coaching, or controls.

See trends before they become incidents

Track adoption, risky behavior, and sensitive usage patterns so security teams can act early.

Evidence trace

Unmanaged browser assistant

Source: Browser assistant in finance workspace
Classified as: Data handling with external destination
Policy result: Needs review; evidence retained

Detect what matters

Focus operator attention on the usage, content, and actions that actually change risk.

Understand intent

Classify whether AI is being used for drafting, coding, research, summarization, or sensitive data handling.

  • Drafting and editing
  • Coding and agentic development
  • Research and synthesis

Detect sensitive content

Highlight when prompts, outputs, or tool actions involve PII, credentials, or restricted internal data.

  • PII and customer records
  • Secrets and tokens
  • Restricted internal material
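To make the categories above concrete, here is a deliberately simplified illustration of pattern-based sensitive content scanning. The regexes are toy examples; a real detector would rely on far more robust classification than pattern matching alone:

```python
import re

# Simplified, illustrative patterns only; real sensitive-data detection
# uses much more than regexes. Labels map to the categories above.
PATTERNS = {
    "email (PII)": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN-like number (PII)": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "AWS access key (secret)": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan(text: str) -> list[str]:
    """Return labels of sensitive patterns found in an outbound prompt or file."""
    return [label for label, rx in PATTERNS.items() if rx.search(text)]

findings = scan("Contact jane.doe@example.com re: account 123-45-6789")
print(findings)
```

Any non-empty result would feed the policy step, where it is weighed alongside intent classification before an allow, warn, block, or escalate decision.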

Control risky behavior

Apply the right response based on context, from simple visibility to warnings, blocks, and escalation.

  • Allow or monitor
  • Warn and review
  • Block and escalate

Evidence for operators, clear outcomes for teams

Give security teams clear evidence and effective controls without turning every AI interaction into a manual review queue.

  • See usage patterns clearly.
  • Understand which tools are managed and which are shadow AI.
  • Respond to risky behavior with consistent controls.
  • Keep a clear audit trail of findings and outcomes.

Operator evidence

Finding: Sensitive data detected in outbound AI workflow
Classification: Data handling via unmanaged assistant
Outcome: Blocked and logged for review

Bring AI use into view

Detect and manage shadow AI usage, understand how tools like Codex and Claude are being used, and apply the right controls with confidence.