EU AI Act — Fully Structured

108 obligations. Every deadline. Every penalty.

The EU AI Act decoded into actionable data — filterable, searchable, and machine-readable. Free and open.

108
Obligations
€35M
Max Penalty
Aug 2, 2026
Key Deadline
4
Risk Levels
Scan your project →
Enforcement Timeline

What's in force, what's coming next

13 Jun 2024
EU AI Act adopted by European Parliament and Council
Begin processing regulation content. Start building compliance framework.
Completed
12 Jul 2024
Published in Official Journal of the European Union
Official text available. Finalize regulatory parsing.
Completed
1 Aug 2024
Entry into force (20 days after publication)
Regulation is law. Countdown timers start for all phases.
Completed
2 Feb 2025
Phase 1: Prohibited practices (Art. 5) and AI literacy (Art. 4) apply
Prohibited practices screening and AI literacy modules must be live. Article 5 checklist and training templates must be available.
Completed
2 Feb 2025
Commission published Guidelines on Prohibited AI Practices and AI System Definition
Update prohibited practices screening logic and definition matching based on official guidelines.
Completed
Obligations Explorer

108 requirements, filterable

Click any row to expand full details, evidence requirements, and automation approach.

Filters: All roles · Provider · Deployer · Prohibited · High · GPAI · Limited · Minimal · Critical · High sev.
20 obligations match
Article 26(1)-(5)
Deployer: Use High-Risk AI Per Instructions and Monitor
Deployer · high
Description

Use system per instructions, assign human oversight, ensure input data quality, monitor operations, keep logs 6+ months.

What to do
  • Implement provider instructions
  • Assign named human oversight persons
  • Verify input data quality
  • Active monitoring
  • Log retention min 6 months
What NOT to do
  • Do NOT use high-risk AI contrary to provider's instructions
  • Do NOT skip monitoring of AI system outputs and performance
Evidence required

Implementation evidence, oversight assignments, monitoring logs, log retention records

Deadline

2026-08-02

Article 26(10) / Article 21
Cooperate with Regulatory Authorities
Both · all risk levels
Description

Provide information, documentation, and access to AI systems upon request from competent authorities.

What to do
  • Designate regulatory contact person
  • Maintain accessible compliance documentation
  • Respond promptly and completely to requests
What NOT to do
  • Do NOT obstruct regulatory authority inspections or information requests
  • Do NOT destroy evidence relevant to compliance investigations
Evidence required

Designated contact details, documentation access procedures

Deadline

2026-08-02

Article 5(1)(e)
Prohibited: Untargeted Facial Image Scraping
Both · unacceptable
Description

Verify no AI system creates or expands facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.

What to do
  • Check if any AI component scrapes facial images
  • Verify face databases are not built from untargeted web/CCTV scraping
What NOT to do
  • Do NOT scrape facial images from the internet or CCTV without targeted lawful basis
  • Do NOT build or expand facial recognition databases through mass collection
Evidence required

Facial data sourcing audit

Deadline

2025-02-02

Penalty

€35M / 7% turnover

Article 12
Implement Automatic Event Logging
Provider · high
Description

High-risk AI must automatically record events (logs) for traceability: periods of use, input references, outputs, human interventions.

What to do
  • Design logging from architecture phase
  • Log: timestamps, inputs, outputs, human oversight actions, errors
  • Integrity protection on logs
  • Provide deployer guidance on log access
What NOT to do
  • Do NOT deploy high-risk AI without automatic event logging enabled
  • Do NOT log only errors — log all events specified by provider
Evidence required

Architecture docs showing logging, sample logs, retention policy, integrity mechanism

Deadline

2026-08-02
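
To make the requirement concrete, here is a minimal sketch of the kind of structured event record Article 12 describes: one entry per event, appended to a JSON-lines log. The field names and file path are illustrative assumptions, not terms from the Act or from complior.

```typescript
// Hypothetical event record; fields mirror the checklist above.
import { appendFileSync } from "node:fs";

interface Art12Event {
  timestamp: string;    // ISO 8601, marks the period of use
  inputRef: string;     // reference to input data, not the data itself
  output: string;       // system output, or a reference to it
  humanAction?: string; // human oversight intervention, if any
  error?: string;       // error condition, if any
}

function logEvent(event: Art12Event, path = "ai-events.jsonl"): void {
  // Append-only writes make tampering easier to detect; pair with
  // checksums or a WORM store for real integrity protection.
  appendFileSync(path, JSON.stringify(event) + "\n");
}

logEvent({
  timestamp: new Date().toISOString(),
  inputRef: "batch-2026-001/row-42",
  output: "score=0.87",
  humanAction: "reviewed-and-approved",
});
```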

Article 5(1)(f)
Prohibited: Workplace/Education Emotion Recognition
Both · unacceptable
Description

Verify no AI system infers emotions of persons in workplace or educational settings (except for medical or safety reasons).

What to do
  • Identify any emotion recognition AI
  • Verify it is NOT used in workplace or education context
  • If medical/safety exception claimed, document justification
What NOT to do
  • Do NOT use emotion recognition AI in workplace or educational settings
  • Do NOT infer employee mood, stress, or engagement via facial/voice analysis (exception: medical/safety)
Evidence required

Emotion recognition context audit

Deadline

2025-02-02

Penalty

€35M / 7% turnover

Article 5(1)(g)
Prohibited: Sensitive Biometric Categorization
Both · unacceptable
Description

Verify no AI system categorizes persons based on biometric data to infer sensitive characteristics (race, political opinions, religion, sexual orientation).

What to do
  • Check if any AI uses biometric data to infer race, religion, political views, or sexual orientation
  • Document absence of such functionality
What NOT to do
  • Do NOT use biometric data to categorize individuals by race, religion, political opinion, or sexual orientation
  • Do NOT infer sensitive attributes from biometric inputs
Evidence required

Biometric categorization screening

Deadline

2025-02-02

Penalty

€35M / 7% turnover

Article 4
Ensure AI Literacy of Staff
Both · unacceptable · high · limited · minimal · GPAI
Description

Every company that builds or uses AI must train its staff to understand AI risks and responsible use. Training must be proportionate to role and risk level.

What to do
  • Conduct skills gap assessment
  • Develop role-based AI literacy training
  • Document training completion
  • Annual refresh cycle
What NOT to do
  • Do NOT allow staff to use AI systems without any training
  • Do NOT treat AI literacy as a one-time event — it requires annual refresh
  • Do NOT apply same training level to all roles — tailor to responsibility
Evidence required

Training records, curriculum, policy document, completion certificates

Deadline

2025-02-02

Penalty

€15M / 3% turnover

Scanner checks for: AI-LITERACY.md or ai-training-policy.* files in project root. Verifies document contains required sections (scope, training levels, schedule, records). Template auto-generates if missing.
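
A minimal sketch of how such a file-and-sections check could work, assuming markdown policy files and plain substring matching on section names; the exact logic complior uses may differ.

```typescript
// Hypothetical check: find a policy file, confirm required sections appear.
import { existsSync, readFileSync } from "node:fs";

const CANDIDATES = ["AI-LITERACY.md", "ai-training-policy.md"];
const REQUIRED_SECTIONS = ["scope", "training levels", "schedule", "records"];

function checkLiteracyDoc(): string[] {
  const file = CANDIDATES.find((f) => existsSync(f));
  if (!file) return ["no AI literacy policy file found"];
  const text = readFileSync(file, "utf8").toLowerCase();
  return REQUIRED_SECTIONS
    .filter((section) => !text.includes(section))
    .map((section) => `${file}: missing section "${section}"`);
}

console.log(checkLiteracyDoc()); // [] means all required sections present
```
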
Article 5
Do Not Deploy Prohibited AI Systems
Both · unacceptable
Description

Screen all AI systems against Article 5 prohibited practices. Eight categories of banned AI uses.

What to do
  • Audit all AI systems against Art. 5 list
  • Document screening results
  • Establish pre-deployment screening process
What NOT to do
  • Do NOT deploy any AI system matching prohibited categories without legal review
  • Do NOT assume third-party tools are automatically compliant with Art. 5
Evidence required

AI inventory with Art. 5 screening results per system

Deadline

2025-02-02

Penalty

€35M / 7% turnover (HIGHEST tier)

Scanner performs static analysis for prohibited practice patterns: import statements for emotion detection SDKs, facial recognition APIs, social scoring libraries. Flags packages matching prohibited use signatures in dependency tree.
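
A simplified sketch of the dependency-tree part of that idea: compare declared dependencies against a deny-list. The package names below are placeholders, and a real scanner would match curated signatures rather than substrings.

```typescript
// Hypothetical deny-list scan over package.json dependencies.
import { readFileSync } from "node:fs";

const DENY_LIST = ["example-emotion-sdk", "example-face-recognition-api"];

function scanDependencies(path = "package.json"): string[] {
  const pkg = JSON.parse(readFileSync(path, "utf8"));
  const deps = Object.keys({ ...pkg.dependencies, ...pkg.devDependencies });
  return deps.filter((dep) => DENY_LIST.some((bad) => dep.includes(bad)));
}

// Any hit warrants legal review, not an automatic verdict.
console.log(scanDependencies());
```
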
Article 5(1)(a)
Prohibited: Subliminal/Manipulative AI Techniques
Both · unacceptable
Description

Verify no AI system deploys subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, that materially distort behavior and cause significant harm.

What to do
  • For each AI system: assess whether it could manipulate user behavior through deceptive patterns, dark patterns, or subliminal techniques
  • Document assessment rationale
What NOT to do
  • Do NOT use dark patterns, persuasion profiling, or behavioral nudging that bypasses user awareness
  • Do NOT deploy recommendation systems that materially distort behavior causing significant harm
Evidence required

Per-system manipulation risk assessment document

Deadline

2025-02-02

Penalty

€35M / 7% turnover

Article 5(1)(b)
Prohibited: Exploitation of Vulnerable Groups
Both · unacceptable
Description

Verify no AI system exploits vulnerabilities of specific groups (age, disability, social/economic situation) to distort behavior causing significant harm.

What to do
  • Identify if AI targets vulnerable populations (children, elderly, disabled, economically disadvantaged)
  • Assess exploitation risk
  • Document findings
What NOT to do
  • Do NOT target elderly, disabled, or economically vulnerable users with manipulative AI features
  • Do NOT exploit cognitive limitations of specific user groups
Evidence required

Vulnerability exploitation risk assessment per system

Deadline

2025-02-02

Penalty

€35M / 7% turnover

Article 5(1)(c)
Prohibited: Social Scoring Systems
Both · unacceptable
Description

Verify no AI system evaluates or classifies persons based on social behavior or personal characteristics leading to detrimental treatment unrelated to the original context.

What to do
  • Check if any AI system scores individuals based on social behavior
  • Verify scores are not used to deny services/rights in unrelated contexts
What NOT to do
  • Do NOT aggregate personal behavior scores across unrelated contexts
  • Do NOT restrict access to services based on AI-scored social behavior
Evidence required

Social scoring screening assessment

Deadline

2025-02-02

Penalty

€35M / 7% turnover

Article 5(1)(d)
Prohibited: Criminal Risk Profiling
Both · unacceptable
Description

Verify no AI system assesses criminal risk of individuals based solely on profiling or personality traits (without concrete behavioral facts).

What to do
  • Check if any AI predicts criminal behavior from personal traits alone
  • Ensure law enforcement AI uses objective factual indicators, not profiling
What NOT to do
  • Do NOT use AI to predict criminal risk based solely on demographics or personality traits
  • Do NOT profile individuals without objective verifiable fact basis
Evidence required

Criminal profiling screening assessment

Deadline

2025-02-02

Penalty

€35M / 7% turnover

Article 9
Establish Risk Management System
Provider · high
Description

Continuous risk management system throughout the high-risk AI lifecycle covering identification, evaluation, mitigation, and testing of risks.

What to do
  • Create documented RMS
  • Identify and analyze known/foreseeable risks
  • Adopt mitigation measures
  • Test system
  • Review and update regularly
What NOT to do
  • Do NOT operate high-risk AI without a documented risk management system
  • Do NOT treat risk management as a one-time assessment — it must be continuous
Evidence required

RMS plan, risk register, mitigation log, testing reports

Deadline

2026-08-02

Penalty

€15M / 3% turnover

Scanner checks for RISK-MANAGEMENT.md or risk-assessment.* documents. Verifies structure includes: identified risks, misuse scenarios, mitigation measures, test results. Auto-generates template with required sections.
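
A minimal sketch of the auto-generation step described above, assuming the four required sections become markdown headings; the section titles and file name follow the scanner note, the rest is illustrative.

```typescript
// Hypothetical skeleton generator: only writes if no risk doc exists yet.
import { existsSync, writeFileSync } from "node:fs";

const SECTIONS = [
  "Identified Risks",
  "Misuse Scenarios",
  "Mitigation Measures",
  "Test Results",
];

function ensureRiskDoc(path = "RISK-MANAGEMENT.md"): void {
  if (existsSync(path)) return;
  const body =
    "# Risk Management System\n\n" +
    SECTIONS.map((s) => `## ${s}\n\n_TODO_\n`).join("\n");
  writeFileSync(path, body);
}

ensureRiskDoc();
```
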
Article 9(2)
RMS: Identify and Analyze Known Risks
Provider · high
Description

Identify and analyze risks to health, safety, and fundamental rights that are known or reasonably foreseeable when the system is used as intended.

What to do
  • Systematic risk identification workshop
  • Document each risk with likelihood and severity
  • Consider risks to different user groups including vulnerable persons
What NOT to do
  • Do NOT ignore known risks documented by provider or reported by users
  • Do NOT omit foreseeable misuse scenarios from risk analysis
Evidence required

Risk register with identified risks, likelihood, severity, affected groups

Deadline

2026-08-02

Article 9(2)(b)
RMS: Evaluate Risks from Misuse
Provider · high
Description

Estimate and evaluate risks not only from intended use but also from reasonably foreseeable misuse of the high-risk AI system.

What to do
  • Brainstorm foreseeable misuse scenarios
  • Assess risks from each misuse scenario
  • Document evaluation and residual risk
What NOT to do
  • Do NOT assume users will only use the system as intended
  • Do NOT skip misuse scenario testing
Evidence required

Misuse risk assessment document, residual risk acceptance rationale

Deadline

2026-08-02

Article 9(6)-(8)
RMS: Test System Before Market Placement
Provider · high
Description

Test the high-risk AI system to identify appropriate risk management measures. Testing must be against defined metrics prior to market placement.

What to do
  • Define test plan with metrics
  • Execute tests including real-world conditions where appropriate (Art. 60)
  • Document test results against acceptance criteria
  • Test prior to market AND throughout lifecycle
What NOT to do
  • Do NOT place high-risk AI on market without testing against defined metrics
  • Do NOT use production data for testing without proper safeguards
Evidence required

Test plan, test results, acceptance criteria, test logs signed by responsible person

Deadline

2026-08-02

Article 10
Ensure Training Data Quality and Governance
Provider · high
Description

Training, validation, and testing datasets must meet quality criteria: relevant, representative, free of errors, complete. Bias detection required.

What to do
  • Implement data governance practices
  • Document data sources
  • Assess for bias
  • Address special category data under GDPR
What NOT to do
  • Do NOT train AI on biased, incomplete, or unrepresentative datasets
  • Do NOT skip data quality assessment before training
Evidence required

Data governance policy, data quality reports, bias analysis, GDPR documentation

Deadline

2026-08-02

Article 10(2)(f)
Data Governance: Bias Detection and Mitigation
Provider · high
Description

Examine training data specifically for possible biases that could lead to discrimination, especially regarding protected characteristics.

What to do
  • Run statistical bias analysis on training data
  • Test for representation gaps across gender, age, ethnicity, disability
  • Implement bias mitigation (resampling, reweighting, debiasing)
  • Document findings and actions
What NOT to do
  • Do NOT deploy AI systems without bias testing across protected characteristics
  • Do NOT ignore disparate impact in model outputs
Evidence required

Bias analysis report, mitigation actions log, before/after fairness metrics

Deadline

2026-08-02
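
As a toy illustration of a representation-gap check, the sketch below compares each group's share of the training data against an expected reference share. The column, tolerance, and reference values are assumptions to tune per use case; real bias analysis would also cover model outputs, not just inputs.

```typescript
// Hypothetical representation check over a tabular dataset.
type Row = { gender: string };

function representationGaps(
  rows: Row[],
  reference: Record<string, number>, // expected share per group
  tolerance = 0.1
): string[] {
  const counts = new Map<string, number>();
  for (const r of rows) counts.set(r.gender, (counts.get(r.gender) ?? 0) + 1);
  const gaps: string[] = [];
  for (const [group, expected] of Object.entries(reference)) {
    const actual = (counts.get(group) ?? 0) / rows.length;
    if (Math.abs(actual - expected) > tolerance) {
      gaps.push(`${group}: ${(actual * 100).toFixed(1)}% vs expected ${(expected * 100).toFixed(0)}%`);
    }
  }
  return gaps;
}

const sample: Row[] = [{ gender: "f" }, { gender: "m" }, { gender: "m" }];
console.log(representationGaps(sample, { f: 0.5, m: 0.5 }));
```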

Article 10(2)(a)-(e)
Data Governance: Document Data Sources and Processing
Provider · high
Description

Document all data collection, labeling, storage, and processing choices. Include data source descriptions and representativeness rationale.

What to do
  • Create data sheet / data card for each training dataset
  • Document collection methodology, labeling process, storage, and preprocessing
  • Assess and document representativeness
What NOT to do
  • Do NOT use training data without documenting provenance and processing steps
  • Do NOT omit data source limitations from documentation
Evidence required

Data sheets/data cards, data processing records, representativeness assessment

Deadline

2026-08-02
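
A hypothetical sketch of a minimal data card as a typed record; the fields follow the checklist above, the names and values are illustrative.

```typescript
// One data card per training dataset, serializable for the technical file.
interface DataCard {
  dataset: string;
  source: string;              // where the data came from
  collection: string;          // collection methodology
  labeling: string;            // labeling process
  preprocessing: string[];     // preprocessing steps applied
  representativeness: string;  // rationale and known limitations
}

const card: DataCard = {
  dataset: "loan-applications-2025",
  source: "internal CRM export",
  collection: "all applications Jan-Dec 2025",
  labeling: "two independent annotators, adjudicated",
  preprocessing: ["dedup", "PII removal", "normalization"],
  representativeness: "underrepresents applicants over 65; see bias report",
};

console.log(JSON.stringify(card, null, 2));
```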

Article 4
AI Literacy: Maintain Training Records
Both · all risk levels
Description

Keep documented records of who was trained, when, on what topics, and their assessment results. Records must be available for auditors.

What to do
  • Create training register with: employee name, role, training date, topics, score
  • Store records securely for audit
  • Update when new staff join or roles change
What NOT to do
  • Do NOT destroy or fail to maintain training completion records
  • Do NOT accept unverified self-attestation as training evidence
Evidence required

Training register (spreadsheet or system), individual completion records

Deadline

2025-02-02

Penalty

€15M / 3% turnover
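
A minimal sketch of the register described above, persisted as one CSV row per completed training; the fields mirror the checklist, the storage format is an assumption.

```typescript
// Hypothetical training register entry, appended as a CSV row.
import { appendFileSync } from "node:fs";

interface TrainingRecord {
  employee: string;
  role: string;
  date: string;   // ISO 8601 training date
  topics: string;
  score: number;
}

function recordTraining(r: TrainingRecord, path = "training-register.csv"): void {
  // Quote topics, which may contain commas or semicolons.
  appendFileSync(path, `${r.employee},${r.role},${r.date},"${r.topics}",${r.score}\n`);
}

recordTraining({
  employee: "Jane Doe",
  role: "ML engineer",
  date: "2025-03-01",
  topics: "AI Act overview; prohibited practices",
  score: 92,
});
```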

Penalty Structure

Three tiers of enforcement

Whichever is higher applies: the fixed amount or the percentage of global annual turnover (a minimal calculation sketch follows the tiers below).

Tier 3 — Other Obligations
€7.5M
or 1% of turnover
Supplying incorrect, incomplete, or misleading information to notified bodies or national authorities.
Legal basis: Art. 99(5). Lower caps apply to SMEs and startups (Art. 99(6)).
Tier 2 — High-Risk Violations
€15M
or 3% of turnover
Non-compliance with requirements for high-risk AI systems, GPAI obligations, or transparency rules.
The tier most deployer obligations fall under.
Tier 1 — Prohibited Practices
€35M
or 7% of turnover
Deploying prohibited AI: social scoring, exploitation of vulnerabilities, unauthorized real-time biometric identification.
Exceeds the GDPR maximum of €20M / 4% of turnover.
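
A minimal sketch of the "whichever is higher" rule, using the tier figures from this page; turnover means global annual turnover.

```typescript
// Maximum exposure for a tier: the higher of the fixed cap and the
// percentage of global annual turnover.
function maxFine(fixedEur: number, pct: number, turnoverEur: number): number {
  return Math.max(fixedEur, pct * turnoverEur);
}

// Tier 1 example: a company with €1bn turnover faces up to €70M (7% > €35M).
console.log(maxFine(35_000_000, 0.07, 1_000_000_000)); // 70000000
```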
Key Definitions

Terms that matter

AI System (Art. 3(1))
A machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment and infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions.
Provider (Art. 3(3))
A natural or legal person that develops an AI system or GPAI model, or has one developed, and places it on the market under its own name or trademark.
Deployer (Art. 3(4))
A natural or legal person that uses an AI system under its authority, except for personal non-professional activity.
High-Risk AI System (Art. 6)
AI system intended as a safety component of a product covered by Annex I legislation, or falling under Annex III categories (biometrics, critical infrastructure, etc.).
General-Purpose AI (Art. 3(63))
An AI model trained on large amounts of data using self-supervision at scale that displays significant generality and can competently perform a wide range of distinct tasks.
Serious Incident (Art. 3(49))
An incident that directly or indirectly leads to death, serious damage to health, serious disruption of critical infrastructure, or serious harm to the environment.
Placing on the Market (Art. 3(9))
The first making available of an AI system on the Union market, whether for payment or free of charge.
Post-Market Monitoring (Art. 72)
All activities carried out by providers to proactively collect and review experience gained from the use of high-risk AI systems.

Know your obligations. Now act on them.

Run a free compliance check on your AI tools in 30 seconds.

Scan your project →
$ npx complior