EU AI Act for Deployers: Complete 2026 Guide
The EU AI Act is the world's first comprehensive AI regulation, and if your company uses AI tools in a professional context — even third-party ones like ChatGPT, Copilot, or Midjourney — you are classified as a deployer under the law. This guide covers who counts as a deployer, the risk tiers, the key deadlines, your core obligations, and the penalties for non-compliance.
Who Is a Deployer?
Article 3(4) of the EU AI Act defines a deployer as:
> A natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.
In plain terms: if your organization uses any AI system in a professional context, you are a deployer. This includes:
- Using ChatGPT or Claude for customer support
- Deploying GitHub Copilot for your development team
- Running AI-powered HR screening tools
- Using AI analytics platforms for business decisions
The Risk-Based Framework
The EU AI Act classifies AI systems into four risk tiers. Your obligations as a deployer depend on which tier your AI tools fall into.
Prohibited AI Practices (Article 5)
Certain AI uses are banned outright, including social scoring systems, exploitation of vulnerable groups, and most real-time remote biometric identification in public spaces. These prohibitions have applied since February 2, 2025. Penalties reach up to €35 million or 7% of global annual turnover, whichever is higher.
High-Risk AI Systems (Annex III)
AI used in employment, credit scoring, law enforcement, education, and critical infrastructure is classified as high-risk under Annex III. Deployers of high-risk systems have the most extensive obligations, including, for certain categories of deployer, a Fundamental Rights Impact Assessment (FRIA), covered below.
Limited Risk — Transparency Obligations
AI systems that interact directly with humans (chatbots), generate synthetic content (deepfakes), or perform emotion recognition must comply with transparency requirements under Article 50.
Minimal Risk
AI systems that don't fall into the above categories carry no tier-specific obligations under the Act, though the general AI literacy duty (Article 4) still applies and best practices remain advisable.
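To make the tiering concrete, here is a minimal TypeScript sketch of how an internal tool inventory might tag each use case with a tier. The example systems and tier assignments are illustrative only, not legal determinations.

```typescript
// Illustrative only: example use cases tagged with EU AI Act risk tiers.
// Tier assignments here are simplified examples, not legal determinations.

type RiskTier = "prohibited" | "high" | "limited" | "minimal";

interface AIUseCase {
  name: string;
  tier: RiskTier;
  reference: string; // the AI Act provision most relevant to the tier
}

const useCases: AIUseCase[] = [
  { name: "Social scoring of citizens", tier: "prohibited", reference: "Art. 5" },
  { name: "CV screening for hiring", tier: "high", reference: "Annex III (employment)" },
  { name: "Customer support chatbot", tier: "limited", reference: "Art. 50" },
  { name: "Spam filtering", tier: "minimal", reference: "No tier-specific obligations" },
];

// A deployer's obligations scale with the highest tier in use.
const highRisk = useCases.filter((u) => u.tier === "high");
console.log(highRisk.map((u) => u.name)); // ["CV screening for hiring"]
```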
Key Deadlines for Deployers
| Deadline | What Happens |
|----------|--------------|
| Feb 2, 2025 | AI literacy obligation (Art. 4) and ban on prohibited practices (Art. 5) take effect (already in force) |
| Aug 2, 2025 | Governance rules and obligations for general-purpose AI models apply |
| Aug 2, 2026 | High-risk system requirements fully enforceable |
| Aug 2, 2027 | Extended transition ends for high-risk AI embedded in Annex I regulated products |
The August 2, 2026 deadline is the critical one for most deployers. By this date, you must have full compliance programs in place for any high-risk AI systems you use.
Your Core Obligations as a Deployer
1. AI Literacy (Article 4) — Due Now
Every organization using AI must ensure its staff has sufficient AI literacy. This isn't optional — it's been enforceable since February 2, 2025.
What this means in practice:
- Train employees who interact with AI systems
- Document training completion and competency levels
- Tailor training to the specific AI tools used and their risk levels
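For the documentation point above, here is a minimal sketch of what a training log entry could look like, assuming a simple in-memory store; none of these field names are mandated by the Act:

```typescript
// Minimal sketch of an AI literacy training log. Field names are
// illustrative; the Act requires literacy, not this exact record shape.

interface TrainingRecord {
  employee: string;
  tool: string; // e.g. "GitHub Copilot"
  riskTier: "prohibited" | "high" | "limited" | "minimal";
  completedAt: Date;
  competency: "basic" | "advanced";
}

const trainingLog: TrainingRecord[] = [];

function recordTraining(record: TrainingRecord): void {
  trainingLog.push(record);
}

recordTraining({
  employee: "jane.doe",
  tool: "ChatGPT (customer support)",
  riskTier: "limited",
  completedAt: new Date(),
  competency: "basic",
});
```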
2. Transparency (Article 50)
If your AI systems interact with the public, you must:
- Inform users they are interacting with AI
- Label AI-generated content appropriately
- Mark synthetic media (deepfakes) as AI-generated
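As a toy illustration of the first requirement, here is a hypothetical wrapper that prepends a disclosure to every chatbot reply; the wording and placement are assumptions you'd adapt to your product:

```typescript
// Toy example: prepend an AI disclosure to every chatbot reply.
// The wording is an assumption; adapt it to your product and audience.

function withAIDisclosure(reply: string): string {
  const disclosure = "You are chatting with an AI assistant.";
  return `${disclosure}\n\n${reply}`;
}

console.log(withAIDisclosure("Your order shipped today and should arrive Friday."));
```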
3. Human Oversight (Article 26)
For high-risk AI systems, deployers must:
- Assign oversight to people with the necessary competence, training, and authority
- Implement mechanisms to override or reverse AI outputs
- Monitor the system's operation and report serious incidents to the provider
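One common way to satisfy the override requirement is a human-in-the-loop gate. The sketch below assumes a hypothetical requestHumanReview hook into your own review queue and an arbitrary confidence threshold:

```typescript
// Sketch of a human-in-the-loop gate for a high-risk decision.
// requestHumanReview is a hypothetical hook into your own review queue;
// the 0.9 confidence threshold is an arbitrary illustration.

interface AIDecision {
  subject: string;
  outcome: "approve" | "reject";
  confidence: number; // model-reported confidence in [0, 1]
}

async function applyDecision(
  decision: AIDecision,
  requestHumanReview: (d: AIDecision) => Promise<"approve" | "reject">
): Promise<"approve" | "reject"> {
  // Route adverse or low-confidence outcomes to a qualified reviewer,
  // who can override or reverse the AI output before it takes effect.
  if (decision.outcome === "reject" || decision.confidence < 0.9) {
    return requestHumanReview(decision);
  }
  return decision.outcome;
}
```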
4. Data Governance (Article 26(4))
To the extent you control the input data, deployers of high-risk AI must ensure:
- Input data is relevant and sufficiently representative for the system's intended purpose
- Data processing complies with the GDPR
- Bias in the data you feed the system is identified and mitigated
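As a rough starting point for the relevance check, here is a sketch that flags fields with too much missing data before they feed a high-risk system; the threshold and record shape are assumptions, and real representativeness and bias analysis needs proper statistical tooling:

```typescript
// Sketch: flag input fields with too much missing data before they feed
// a high-risk system. The 5% threshold and record shape are assumptions;
// real representativeness and bias checks need proper statistical tooling.

interface InputRecord {
  [field: string]: string | number | null | undefined;
}

function missingRate(records: InputRecord[], field: string): number {
  const missing = records.filter((r) => r[field] === null || r[field] === undefined);
  return missing.length / records.length;
}

function flagUnreliableFields(
  records: InputRecord[],
  fields: string[],
  maxMissing = 0.05
): string[] {
  return fields.filter((f) => missingRate(records, f) > maxMissing);
}
```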
5. Fundamental Rights Impact Assessment (Article 27)
Before first use of a high-risk AI system, certain deployers (bodies governed by public law, private entities providing public services, and those using AI for credit scoring or life and health insurance risk assessment) must conduct a FRIA assessing the system's impact on fundamental rights. It is similar to a DPIA under the GDPR but focused specifically on AI risks.
Learn more about FRIAs in our dedicated guide.
Penalties for Non-Compliance
The EU AI Act has three penalty tiers:
- Tier 1 — Prohibited practices: Up to €35M or 7% of global turnover
- Tier 2 — High-risk violations: Up to €15M or 3% of global turnover
- Tier 3 — Other obligations: Up to €7.5M or 1% of global turnover
These are maximums — the actual penalty depends on the severity, duration, and number of affected persons.
Getting Started: A 5-Step Action Plan
1. **Inventory your AI tools.** List every AI system your organization uses. Don't forget embedded AI in SaaS products (see the sketch after this list).
2. **Classify risk levels.** Determine which risk category each tool falls into. Use the Complior AI Registry as a reference.
3. **Conduct a gap analysis.** Compare your current practices against EU AI Act requirements for each risk level.
4. **Implement controls.** Roll out training programs, transparency measures, oversight mechanisms, and documentation.
5. **Monitor continuously.** Compliance isn't a one-time project. Set up ongoing monitoring and regular re-assessments.
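For step 1, here is a minimal sketch of what an inventory entry might capture, including AI embedded in SaaS products, which is the part teams most often miss; all field names are illustrative:

```typescript
// Sketch of one AI inventory entry. Field names are illustrative;
// the point is to capture AI embedded inside SaaS products too.

interface InventoryEntry {
  system: string;
  vendor: string;
  embeddedIn?: string; // parent SaaS product, if any
  businessUse: string;
  owner: string; // the internal team accountable for this tool
}

const inventory: InventoryEntry[] = [
  { system: "Copilot", vendor: "GitHub", businessUse: "Code completion", owner: "Engineering" },
  {
    system: "Smart Compose",
    vendor: "Google",
    embeddedIn: "Gmail",
    businessUse: "Email drafting",
    owner: "IT",
  },
];

console.log(`${inventory.length} AI systems inventoried`);
```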
How Complior Helps
Complior automates the hardest parts of AI Act compliance:
- Auto-detection scans your codebase and SaaS stack to find every AI tool
- Risk classification maps each tool to the correct risk tier with specific article references
- Document generation produces FRIAs, transparency notices, and compliance policies in one click
- Ongoing monitoring tracks your compliance posture and alerts you to changes
Run a free scan now:

```bash
npx complior scan
```
Or check your obligations online — no signup required.
What's Next?
The August 2, 2026 deadline is approaching. The Commission's Digital Omnibus Proposal may delay some requirements, but the legally binding date remains unchanged. Don't wait for regulatory certainty — start preparing now.
This article is for informational purposes only and does not constitute legal advice. For specific guidance on your compliance obligations, consult a qualified legal professional.