EU AI Act Compliance Timeline for 2025-2027
The EU AI Act is not a single deadline but a phased compliance program with real dates: February 2, 2025 (prohibited practices and AI literacy), August 2, 2025 (governance rules and GPAI obligations), August 2, 2026 (most requirements), and August 2, 2027 (end of the extended transition period for certain high-risk systems). The compliance risk lies in missing the interim milestones, not the headline date.
Executive Summary
The EU AI Act entered into force on August 1, 2024 and becomes fully applicable on August 2, 2026, with phased obligations in between. Prohibited practices and AI literacy obligations apply from February 2, 2025; governance rules and GPAI obligations apply from August 2, 2025; and certain high-risk systems have an extended transition period until August 2, 2027. Treat 2025 as the year to build compliance foundations.
Why the Timeline Matters More Than the Headline
The AI Act is structured to phase in responsibilities over time. That means compliance is a program with multiple checkpoints, not a single readiness date. If your program waits for 2026, you are already late on the first two milestones. Security and legal teams should map each checkpoint to policy, procurement, and technical controls.
Key Milestones for 2025 to 2027
The EU AI Act entered into force on August 1, 2024, but obligations apply on a staged schedule. The first wave of provisions, including prohibited practices and AI literacy requirements, applies from February 2, 2025. The next major checkpoint is August 2, 2025, when governance rules and obligations for general-purpose AI (GPAI) models begin. Most remaining requirements apply from August 2, 2026, with additional time granted for certain high-risk systems until August 2, 2027.
Milestone Snapshot
- February 2, 2025: Prohibited practices and AI literacy obligations apply.
- August 2, 2025: Governance rules and GPAI model obligations apply.
- August 2, 2026: Most AI Act requirements apply.
- August 2, 2027: Extended transition period ends for certain high-risk systems.
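The staged schedule above can be sketched as a simple date lookup. This is an illustrative compliance-tracking helper, not part of the Act itself; the milestone labels are summaries of the dates listed above.

```python
from datetime import date

# Milestone table summarizing the staged schedule described above.
MILESTONES = [
    (date(2025, 2, 2), "Prohibited practices and AI literacy obligations"),
    (date(2025, 8, 2), "Governance rules and GPAI model obligations"),
    (date(2026, 8, 2), "Most remaining AI Act requirements"),
    (date(2027, 8, 2), "End of extended transition for certain high-risk systems"),
]

def obligations_in_effect(today: date) -> list[str]:
    """Return the milestone labels whose effective dates have passed."""
    return [label for d, label in MILESTONES if today >= d]
```

For example, `obligations_in_effect(date(2025, 6, 1))` returns only the February 2025 wave, while a date after August 2, 2026 returns the first three entries.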
What Changes for Enterprise AI Programs
Once the GPAI obligations apply, enterprises need vendor clarity on model lineage, documentation, and system-level risk controls. For high-risk systems, the AI Act expects risk management, data governance, transparency, logging, and human oversight. These are governance and engineering tasks, not just legal checkboxes.
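The logging and human-oversight expectations above imply structured runtime records. A minimal sketch of such an audit record follows; the schema and field names are assumptions chosen for illustration, not a format prescribed by the AI Act.

```python
import json
from datetime import datetime, timezone

def audit_record(system: str, event: str, detail: dict, blocked: bool = False) -> str:
    """Serialize one runtime audit event as a JSON line.

    `event` might be "data_access" or "tool_call"; `blocked` records
    whether a runtime control stopped the action.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "event": event,
        "detail": detail,
        "blocked": blocked,
    }
    return json.dumps(record)
```

Emitting one such line per data movement or tool invocation gives reviewers a queryable trail rather than ad hoc application logs.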
How to Use 2025 to Get Ahead
The safest approach is to treat 2025 as the build year: classify AI use cases, document model suppliers, establish AI literacy training, and set up runtime controls for data exposure and tool access. Waiting for 2026 compresses all of that into a narrow window.
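The build-year checklist above starts with an inventory of AI use cases. A minimal record type for that inventory might look like the following; the field names and the 2025-triage rule are assumptions for illustration, not terminology from the Act.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    name: str
    supplier: str                # model vendor or internal team
    risk_category: str           # e.g. "prohibited", "high", "limited", "minimal"
    gpai_based: bool             # built on a general-purpose AI model?
    controls: list[str] = field(default_factory=list)  # runtime controls in place

def needs_attention_in_2025(uc: AIUseCase) -> bool:
    """Flag use cases touched by the 2025 milestones:
    prohibited practices (February) and GPAI obligations (August)."""
    return uc.risk_category == "prohibited" or uc.gpai_based
```

Even a spreadsheet with these columns is enough to start; the point is that classification and supplier documentation exist before the 2026 deadline compresses the work.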
How AARSM Helps
AARSM provides the runtime evidence auditors will ask for: what data moved, what tools ran, and what was blocked. That is the foundation for AI Act compliance.
About This Analysis
This analysis references official EU and Commission timelines for the AI Act and public guidance on staged compliance dates.