August 2, 2026 Deadline: What AI Deployers Must Do Now
On August 2, 2026, the EU AI Act's most impactful provisions take effect. High-risk AI system requirements become fully enforceable, and organizations that aren't prepared face penalties of up to €15 million or 3% of global annual turnover, whichever is higher.
With less than four months to go, here's exactly what you need to do.
What Changes on August 2, 2026?
The EU AI Act follows a phased implementation schedule. Here's where we stand:
- Feb 2, 2025 — AI literacy requirements (Art. 4) — already in force
- Aug 2, 2025 — Prohibited AI practices ban — already in force
- Aug 2, 2026 — High-risk AI system requirements — coming next
- Aug 2, 2027 — Annex I product safety integration
The August 2026 milestone is when Annex III high-risk requirements become enforceable. This is the deadline that affects the most organizations.
What About the Digital Omnibus Proposal?
In November 2025, the European Commission proposed delaying some deadlines: Annex III from August 2026 to December 2027, and Annex I from August 2027 to August 2028. The IMCO/LIBE committees voted 101-9 in favor.
However, trilogue negotiations have not yet started. Until they conclude and a final text is adopted, the legally binding date remains August 2, 2026.
Do not rely on the Omnibus Proposal as reason to delay compliance. If trilogue fails or stalls, the original deadline stands. Organizations that waited will have zero preparation time.
Our recommendation: prepare for August 2026 and treat any delay as bonus time.
Who Is Affected?
Any organization that deploys AI systems classified under Annex III of the EU AI Act. This covers AI used in:
- Employment and HR — AI-powered recruitment, performance evaluation, promotion decisions
- Education — Automated grading, student admission systems
- Credit and financial services — AI credit scoring, insurance risk assessment
- Law enforcement — Risk assessment tools, predictive policing
- Critical infrastructure — AI managing energy, water, transport systems
- Migration and border control — Document verification, visa processing
If your organization uses any AI tool in these domains — even third-party SaaS products — you have obligations as a deployer.
Your 120-Day Action Plan
Days 1–14: Inventory and Assessment
Inventory every AI tool your organization uses. This includes:
- Obvious ones: ChatGPT, Copilot, Midjourney
- Embedded AI: CRM intelligence features, automated email tools, analytics platforms
- Department-specific: HR screening tools, financial modeling software
Use the Complior AI Registry to look up risk classifications for common tools.
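Even a lightweight structured record beats a spreadsheet nobody updates. Here is a minimal sketch in Python; the record fields and vendor names are illustrative assumptions, not a Complior or AI Act schema:

```python
from dataclasses import dataclass, field

# Hypothetical inventory record: field names are illustrative,
# not a Complior or AI Act schema.
@dataclass
class AITool:
    name: str
    vendor: str
    department: str            # e.g. "HR", "Finance", "All"
    embedded: bool = False     # AI feature inside a larger product?
    use_cases: list = field(default_factory=list)

inventory = [
    AITool("ChatGPT", "OpenAI", "All", use_cases=["drafting", "research"]),
    AITool("CV Screener", "ExampleVendor", "HR", use_cases=["recruitment"]),
    AITool("CRM Lead Scoring", "ExampleCRM", "Sales", embedded=True,
           use_cases=["lead prioritisation"]),
]

# Flag tools used in Annex III-sensitive departments for closer review
hr_tools = [t.name for t in inventory if t.department == "HR"]
print(hr_tools)  # ['CV Screener']
```

Capturing the owning department and whether the AI is embedded in a larger product makes the classification step that follows largely mechanical.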
Days 15–30: Risk Classification
For each tool, determine:
- Which Annex III category applies (if any)
- The specific articles and obligations that apply to your role as deployer
- Whether a Fundamental Rights Impact Assessment (FRIA) is required under Article 27
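In code, the classification step amounts to a lookup from use case to Annex III category. The mapping below is a simplified sketch, not legal analysis; note in particular that whether Article 27 actually requires a FRIA also depends on who the deployer is (public bodies and certain credit and insurance deployers):

```python
# Illustrative mapping from internal use case to Annex III category.
# The labels are assumptions for this sketch; real classification
# requires reading the Annex III entries against your actual use.
ANNEX_III = {
    "recruitment":       "Employment (Annex III)",
    "credit_scoring":    "Credit and financial services (Annex III)",
    "automated_grading": "Education (Annex III)",
}

def classify(use_case: str) -> str:
    """Return the Annex III category for a use case, if any."""
    return ANNEX_III.get(use_case, "not high-risk under Annex III")

print(classify("recruitment"))  # Employment (Annex III)
print(classify("drafting"))     # not high-risk under Annex III
```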
Days 31–60: Documentation Sprint
Generate the required documentation:
- Fundamental Rights Impact Assessments for every high-risk tool
- Transparency notices for public-facing AI
- Human oversight procedures documenting how decisions can be reviewed and overridden
- Data governance policies covering input data quality and bias monitoring
Days 61–90: Implementation
- Deploy human oversight mechanisms — ensure every high-risk AI decision has a qualified human reviewer
- Implement logging and traceability — maintain records of AI system outputs for regulatory inspection
- Complete AI literacy training for all affected staff (if not done already)
- Set up incident reporting processes for serious AI incidents
Days 91–120: Validation and Monitoring
- Conduct internal audits against the full obligation set
- Generate your audit package — all documentation, training records, and compliance evidence in one bundle
- Set up continuous monitoring to detect compliance drift
- Brief executive leadership and board on compliance status
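Continuous monitoring for compliance drift can begin as simply as diffing the current tool inventory against the last audited baseline: anything new is unreviewed, anything gone needs its records closed out. A sketch with made-up tool names:

```python
# Sketch: detect "compliance drift" by diffing the current inventory
# against the last audited baseline. Tool names are illustrative.
baseline = {"ChatGPT", "CV Screener", "CRM Lead Scoring"}
current = {"ChatGPT", "CV Screener", "CRM Lead Scoring", "New Analytics AI"}

unreviewed = current - baseline   # adopted since the last audit
retired = baseline - current      # removed; close out their records

print(sorted(unreviewed))  # ['New Analytics AI']
```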
The Cost of Inaction
Let's be concrete about the risk:
| Violation Type | Maximum Penalty |
|---------------|----------------|
| Prohibited practices | €35M or 7% of global turnover |
| High-risk non-compliance | €15M or 3% of global turnover |
| Misleading information to authorities | €7.5M or 1% of global turnover |
For a company with €500M annual revenue, a Tier 2 violation means a potential fine of €15 million (3% of €500M; the applicable cap is whichever is higher of the fixed amount and the turnover percentage). And penalties apply per violation: multiple non-compliant AI systems multiply the exposure.
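The fine arithmetic is worth making explicit: for undertakings, each tier caps the fine at the fixed amount or the percentage of worldwide annual turnover, whichever is higher. A quick calculator:

```python
def max_penalty(turnover_eur: float, fixed_cap: float, pct: float) -> float:
    """AI Act fines for undertakings: the fixed cap or the percentage
    of worldwide annual turnover, whichever is HIGHER."""
    return max(fixed_cap, turnover_eur * pct)

# High-risk non-compliance tier: €15M or 3% of turnover
print(max_penalty(500_000_000, 15_000_000, 0.03))    # 15000000.0
print(max_penalty(2_000_000_000, 15_000_000, 0.03))  # 60000000.0
```

At €500M in turnover the two caps coincide; above that, the percentage dominates and the exposure scales with revenue.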
Beyond fines, non-compliance creates:
- Reputational damage — regulatory actions become public
- Business disruption — authorities can order AI systems taken offline
- Competitive disadvantage — compliant competitors will win regulated contracts
How Complior Gets You Ready
Complior is built specifically for deployers facing the August 2026 deadline:
- Scan — Auto-detect every AI tool in your organization in 30 seconds
- Classify — Instant risk classification with article-by-article obligation mapping
- Generate — One-click FRIA, transparency notices, and compliance documentation
- Monitor — Continuous compliance tracking with drift alerts
npx complior scan
Or run a free compliance check online — no signup required.
Key Takeaways
- August 2, 2026 is the legally binding deadline — the Omnibus Proposal may or may not change this
- Deployers have real obligations — this isn't just a provider problem
- Start with inventory — you can't comply with what you don't know about
- 120 days is tight but achievable — focus on high-risk systems first
- Automation is essential — manual compliance at scale is impractical
Don't wait. Start your compliance journey today.
This article is for informational purposes only and does not constitute legal advice. For specific guidance on your compliance obligations, consult a qualified legal professional.