Compliance · January 2, 2025 · 9 min read

EU AI Act Compliance Timeline for 2025-2027

A phased view of what applies when: February 2025 prohibitions, August 2025 GPAI obligations, and 2026-2027 enforcement milestones.

European Parliament hemicycle in session. Photo: Diliff via Wikimedia Commons, CC BY-SA 3.0.

Executive summary

The EU AI Act timeline matters because enterprises need operational controls well before the final deadline. Vendor paperwork will not be enough if the organization cannot show how AI use is governed, observed, and constrained in practice.

What the EU AI Act Timeline Shows

The EU AI Act is a staged regulatory framework, not a single deadline. It entered into force on August 1, 2024, then layered obligations over time: prohibited practices and literacy requirements from February 2, 2025, governance rules and GPAI obligations from August 2, 2025, most remaining duties from August 2, 2026, and some longer high-risk transitions through August 2, 2027.
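For planning purposes, the staged dates above can be treated as a simple lookup: given today's date, which obligations already apply? The sketch below encodes that idea; the milestone labels are illustrative summaries for internal tracking, not legal text.

```python
from datetime import date

# Milestone dates from the Act's staged application; labels are
# illustrative planning summaries, not statutory language.
AI_ACT_MILESTONES = {
    date(2024, 8, 1): "Act enters into force",
    date(2025, 2, 2): "Prohibited practices and AI literacy obligations apply",
    date(2025, 8, 2): "Governance rules and GPAI obligations apply",
    date(2026, 8, 2): "Most remaining obligations apply",
    date(2027, 8, 2): "Extended transition for certain high-risk systems ends",
}

def active_obligations(today: date) -> list[str]:
    """Return milestone labels whose application date has already passed."""
    return [label for d, label in sorted(AI_ACT_MILESTONES.items()) if d <= today]
```

A compliance calendar built this way makes it obvious that most obligations are live well before the 2026 general-application date.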

That timeline is the source evidence for the rest of the article. It shows when the law expects organizations to move from policy statements to operational proof.

What This Means for Organizations

Enterprises should read the AI Act as a demand for operational evidence. It is not enough to ask a vendor whether a model is compliant or whether documentation exists. The organization still has to know how the model is used, which workflows are high risk, what data is flowing into it, which human oversight points exist, and whether the resulting system can be explained to auditors and regulators.

The practical consequence is that compliance lives in the deployed workflow, not the vendor packet. By the time the final 2026 or 2027 deadlines arrive, the hard work should already be done: use-case classification, supplier mapping, internal policy, logging, and runtime controls.

How the Control and Risk Model Fits the Act

AI governance is hard because the system boundary is blurry. A model provider owns some documentation and some safeguards. The enterprise owns the actual deployment context, including connected data, human approvals, business process integration, and the consequences of misuse. That split makes AI inherently difficult to govern with conventional procurement-only controls.

In practice, the highest-risk failures are rarely about whether a policy exists on paper. They are about whether the organization can constrain real workflows, observe what the system touched, and intervene when a use case crosses a regulatory or operational threshold.

How 3LS Helps with Runtime Evidence

3LS gives enterprises the runtime layer regulators and auditors will actually care about: visibility into where AI is active, evidence of what data moved through live workflows, and policy enforcement around higher-risk interactions. That helps turn AI governance from a vendor questionnaire into a control system that can support oversight, audit, and internal review.

For the AI Act, prompts, uploads, OAuth grants, and tool delegation are runtime evidence points, not side details, because they show where data and authority actually moved.

The value is not that 3LS replaces legal or risk teams. It is that it gives them operational evidence tied to the use cases they actually have to govern. If the organization cannot show what required review, what was blocked, and which workflows involved sensitive data or tool access, compliance claims remain fragile.
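To make "runtime evidence" concrete, one minimal form it can take is an append-only log of structured events: each prompt, upload, OAuth grant, or tool call becomes a record tied to a workflow and a policy outcome. The field names below are a hypothetical sketch, not a 3LS schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

# Hypothetical evidence record; field names are illustrative assumptions.
@dataclass
class RuntimeEvidenceEvent:
    workflow_id: str
    event_type: str          # e.g. "prompt", "upload", "oauth_grant", "tool_call"
    data_classes: list[str]  # data categories observed in the interaction
    policy_outcome: str      # "allowed", "flagged_for_review", or "blocked"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_audit_row(self) -> dict:
        """Flatten the event into a dict suitable for an append-only audit log."""
        return asdict(self)
```

Records shaped like this are what let a compliance team answer the auditor's questions directly: which workflows touched sensitive data, what was blocked, and when.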

What To Operationalize Next

Use 2025 and 2026 as build years for evidence, not just documentation. Classify AI workflows, define which ones are high risk, map model suppliers, establish literacy and review obligations, and make sure security and compliance teams can see how live systems are using data and tools.
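The classification step above can start as a coarse internal rubric long before a full legal assessment exists. The sketch below assumes a hypothetical domain list loosely modeled on Annex III categories; the tiers are planning labels, not legal determinations.

```python
# Illustrative domain list inspired by Annex III-style categories;
# an actual program would map this from legal review, not a hardcoded set.
HIGH_RISK_DOMAINS = {"employment", "credit_scoring", "education", "law_enforcement"}

def classify_workflow(domain: str, uses_gpai: bool) -> str:
    """Return a coarse internal risk tier for planning, not a legal determination."""
    if domain in HIGH_RISK_DOMAINS:
        return "high"
    if uses_gpai:
        return "limited"
    return "minimal"
```

Even a rubric this crude forces the inventory work: you cannot classify a workflow you have not found and described.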

If your AI governance program cannot produce runtime evidence, it will struggle under the Act even if the paperwork looks complete.
