108 obligations. Every deadline. Every penalty.
The EU AI Act decoded into actionable data — filterable, searchable, and machine-readable. Free and open.
What's in force, what's coming next
108 requirements, filterable
Click any row to expand full details, evidence requirements, and automation approach.
Description
Use system per instructions, assign human oversight, ensure input data quality, monitor operations, keep logs 6+ months.
What to do
- Implement provider instructions
- Assign named human oversight persons
- Verify input data quality
- Monitor operations actively
- Retain logs for at least 6 months
What NOT to do
- Do NOT use high-risk AI contrary to provider's instructions
- Do NOT skip monitoring of AI system outputs and performance
Evidence required
Implementation evidence, oversight assignments, monitoring logs, log retention records
Deadline
2026-08-02
Description
Provide information, documentation, and access to AI systems upon request from competent authorities.
What to do
- Designate regulatory contact person
- Maintain accessible compliance documentation
- Respond promptly and completely to requests
What NOT to do
- Do NOT obstruct regulatory authority inspections or information requests
- Do NOT destroy evidence relevant to compliance investigations
Evidence required
Designated contact details, documentation access procedures
Deadline
2026-08-02
Description
Verify no AI system creates or expands facial recognition databases through untargeted scraping from internet or CCTV.
What to do
- Check if any AI component scrapes facial images
- Verify face databases are not built from untargeted web/CCTV scraping
What NOT to do
- Do NOT scrape facial images from the internet or CCTV without targeted lawful basis
- Do NOT build or expand facial recognition databases through mass collection
Evidence required
Facial data sourcing audit
Deadline
2025-02-02
Penalty
€35M / 7% turnover
Description
High-risk AI must automatically record events (logs) for traceability: periods of use, input references, outputs, human interventions.
What to do
- Design logging from architecture phase
- Log: timestamps, inputs, outputs, human oversight actions, errors
- Integrity protection on logs
- Provide deployer guidance on log access
What NOT to do
- Do NOT deploy high-risk AI without automatic event logging enabled
- Do NOT log only errors — log all events specified by the provider
Evidence required
Architecture docs showing logging, sample logs, retention policy, integrity mechanism
Deadline
2026-08-02
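One way to meet the "integrity protection on logs" point above is a hash chain, where each entry commits to the previous one so retroactive edits are detectable. A minimal sketch, assuming JSON-serializable event payloads; the field names and event types are illustrative, not prescribed by the Act:

```python
import hashlib
import json
import time

def append_event(log, event_type, payload):
    """Append an event chained to the previous entry's hash,
    making after-the-fact tampering detectable."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "ts": time.time(),
        "type": event_type,   # e.g. "input", "output", "override", "error"
        "payload": payload,
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log):
    """Recompute every hash; True only if no entry was altered or reordered."""
    prev = "0" * 64
    for e in log:
        body = {k: v for k, v in e.items() if k != "hash"}
        if body["prev_hash"] != prev:
            return False
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != e["hash"]:
            return False
        prev = e["hash"]
    return True
```

In production you would anchor the chain tip in write-once storage; this sketch only shows the verification idea.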
Description
Verify no AI system infers emotions of persons in workplace or educational settings (except for medical or safety reasons).
What to do
- Identify any emotion recognition AI
- Verify it is NOT used in workplace or education context
- If medical/safety exception claimed, document justification
What NOT to do
- Do NOT use emotion recognition AI in workplace or educational settings
- Do NOT infer employee mood, stress, or engagement via facial/voice analysis (exception: medical/safety)
Evidence required
Emotion recognition context audit
Deadline
2025-02-02
Penalty
€35M / 7% turnover
Description
Verify no AI system categorizes persons based on biometric data to infer sensitive characteristics (race, political opinions, religion, sexual orientation).
What to do
- Check if any AI uses biometric data to infer race, religion, political views, or sexual orientation
- Document absence of such functionality
What NOT to do
- Do NOT use biometric data to categorize individuals by race, religion, political opinion, or sexual orientation
- Do NOT infer sensitive attributes from biometric inputs
Evidence required
Biometric categorization screening
Deadline
2025-02-02
Penalty
€35M / 7% turnover
Description
Every company that uses or builds AI must train its staff to understand AI risks and responsible use. Training must be proportionate to role and risk level.
What to do
- Conduct skills gap assessment
- Develop role-based AI literacy training
- Document training completion
- Annual refresh cycle
What NOT to do
- Do NOT allow staff to use AI systems without any training
- Do NOT treat AI literacy as a one-time event — it requires annual refresh
- Do NOT apply same training level to all roles — tailor to responsibility
Evidence required
Training records, curriculum, policy document, completion certificates
Deadline
2025-02-02
Penalty
€15M / 3% turnover
Description
Screen all AI systems against Article 5 prohibited practices. Eight categories of banned AI uses.
What to do
- Audit all AI systems against Art. 5 list
- Document screening results
- Establish pre-deployment screening process
What NOT to do
- Do NOT deploy any AI system matching prohibited categories without legal review
- Do NOT assume third-party tools are automatically compliant with Art. 5
Evidence required
AI inventory with Art. 5 screening results per system
Deadline
2025-02-02
Penalty
€35M / 7% turnover (highest tier)
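The "AI inventory with Art. 5 screening results per system" evidence item can be kept as structured data. A minimal sketch, assuming one boolean flag per prohibited category; the category labels are paraphrases of Art. 5, not official names:

```python
# The eight Art. 5 prohibited-practice categories (paraphrased labels)
PROHIBITED = [
    "subliminal_manipulation",
    "vulnerability_exploitation",
    "social_scoring",
    "criminal_risk_profiling",
    "untargeted_face_scraping",
    "emotion_recognition_work_education",
    "biometric_categorisation_sensitive",
    "realtime_remote_biometric_id_public",
]

def screen_system(name, flags):
    """Return a screening record for one AI system; any True flag
    means the system needs legal review before deployment."""
    hits = [c for c in PROHIBITED if flags.get(c, False)]
    return {"system": name, "hits": hits, "cleared": not hits}
```

Running this over every system in the inventory yields the per-system screening evidence the row asks for.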
Description
Verify no AI system uses subliminal, manipulative, or deceptive techniques to distort behavior beyond a person's consciousness, causing significant harm.
What to do
- For each AI system: assess whether it could manipulate user behavior through deceptive patterns, dark patterns, or subliminal techniques
- Document assessment rationale
What NOT to do
- Do NOT use dark patterns, persuasion profiling, or behavioral nudging that bypasses user awareness
- Do NOT deploy recommendation systems that materially distort behavior causing significant harm
Evidence required
Per-system manipulation risk assessment document
Deadline
2025-02-02
Penalty
€35M / 7% turnover
Description
Verify no AI system exploits vulnerabilities of specific groups (age, disability, social/economic situation) to distort behavior causing significant harm.
What to do
- Identify if AI targets vulnerable populations (children, elderly, disabled, economically disadvantaged)
- Assess exploitation risk
- Document findings
What NOT to do
- Do NOT target elderly, disabled, or economically vulnerable users with manipulative AI features
- Do NOT exploit cognitive limitations of specific user groups
Evidence required
Vulnerability exploitation risk assessment per system
Deadline
2025-02-02
Penalty
€35M / 7% turnover
Description
Verify no AI system evaluates or classifies persons based on social behavior or personal characteristics leading to detrimental treatment unrelated to the original context.
What to do
- Check if any AI system scores individuals based on social behavior
- Verify scores are not used to deny services/rights in unrelated contexts
What NOT to do
- Do NOT aggregate personal behavior scores across unrelated contexts
- Do NOT restrict access to services based on AI-scored social behavior
Evidence required
Social scoring screening assessment
Deadline
2025-02-02
Penalty
€35M / 7% turnover
Description
Verify no AI system assesses criminal risk of individuals based solely on profiling or personality traits (without concrete behavioral facts).
What to do
- Check if any AI predicts criminal behavior from personal traits alone
- Ensure law enforcement AI uses objective factual indicators, not profiling
What NOT to do
- Do NOT use AI to predict criminal risk based solely on demographics or personality traits
- Do NOT profile individuals without objective verifiable fact basis
Evidence required
Criminal profiling screening assessment
Deadline
2025-02-02
Penalty
€35M / 7% turnover
Description
Continuous risk management system throughout the high-risk AI lifecycle covering identification, evaluation, mitigation, and testing of risks.
What to do
- Create documented RMS
- Identify and analyze known/foreseeable risks
- Adopt mitigation measures
- Test system
- Review and update regularly
What NOT to do
- Do NOT operate high-risk AI without a documented risk management system
- Do NOT treat risk management as a one-time assessment — it must be continuous
Evidence required
RMS plan, risk register, mitigation log, testing reports
Deadline
2026-08-02
Penalty
€15M / 3% turnover
Description
Identify and analyze risks to health, safety, and fundamental rights that are known or reasonably foreseeable when the system is used as intended.
What to do
- Systematic risk identification workshop
- Document each risk with likelihood and severity
- Consider risks to different user groups including vulnerable persons
What NOT to do
- Do NOT ignore known risks documented by provider or reported by users
- Do NOT omit foreseeable misuse scenarios from risk analysis
Evidence required
Risk register with identified risks, likelihood, severity, affected groups
Deadline
2026-08-02
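The risk register above pairs each risk with likelihood and severity. A common way to make that auditable is a likelihood × severity score; a minimal sketch where the 1-5 scale and priority thresholds are illustrative conventions, not values mandated by the Act:

```python
def risk_entry(risk_id, description, likelihood, severity, affected_groups):
    """One risk register row. Likelihood and severity on a 1-5 scale;
    the product drives mitigation priority."""
    assert 1 <= likelihood <= 5 and 1 <= severity <= 5
    score = likelihood * severity
    return {
        "id": risk_id,
        "description": description,
        "likelihood": likelihood,
        "severity": severity,
        "score": score,
        "priority": "high" if score >= 15 else "medium" if score >= 8 else "low",
        "affected_groups": affected_groups,
    }
```

Sorting the register by score gives a defensible order for the mitigation log.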
Description
Estimate and evaluate risks not only from intended use but also from reasonably foreseeable misuse of the high-risk AI system.
What to do
- Brainstorm foreseeable misuse scenarios
- Assess risks from each misuse scenario
- Document evaluation and residual risk
What NOT to do
- Do NOT assume users will only use the system as intended
- Do NOT skip misuse scenario testing
Evidence required
Misuse risk assessment document, residual risk acceptance rationale
Deadline
2026-08-02
Description
Test the high-risk AI system to identify appropriate risk management measures. Testing must be performed against predefined metrics before placing the system on the market.
What to do
- Define test plan with metrics
- Execute tests including real-world conditions where appropriate (Art. 60)
- Document test results against acceptance criteria
- Test prior to market AND throughout lifecycle
What NOT to do
- Do NOT place high-risk AI on market without testing against defined metrics
- Do NOT use production data for testing without proper safeguards
Evidence required
Test plan, test results, acceptance criteria, test logs signed by responsible person
Deadline
2026-08-02
Description
Training, validation, and testing datasets must meet quality criteria: relevant, representative, free of errors, complete. Bias detection required.
What to do
- Implement data governance practices
- Document data sources
- Assess for bias
- Address special category data under GDPR
What NOT to do
- Do NOT train AI on biased, incomplete, or unrepresentative datasets
- Do NOT skip data quality assessment before training
Evidence required
Data governance policy, data quality reports, bias analysis, GDPR documentation
Deadline
2026-08-02
Description
Examine training data specifically for possible biases that could lead to discrimination, especially regarding protected characteristics.
What to do
- Run statistical bias analysis on training data
- Test for representation gaps across gender, age, ethnicity, disability
- Implement bias mitigation (resampling, reweighting, debiasing)
- Document findings and actions
What NOT to do
- Do NOT deploy AI systems without bias testing across protected characteristics
- Do NOT ignore disparate impact in model outputs
Evidence required
Bias analysis report, mitigation actions log, before/after fairness metrics
Deadline
2026-08-02
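The representation-gap test in the steps above can start very simply: compare each group's share in the training data against an expected population share. A minimal sketch, assuming you already have reference proportions (e.g. census-derived) for the protected attribute:

```python
from collections import Counter

def representation_gaps(records, attribute, reference_shares):
    """For each group, return (dataset share - expected share).
    Large positive/negative gaps flag over/under-representation."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {
        group: counts.get(group, 0) / total - expected
        for group, expected in reference_shares.items()
    }
```

This only covers representation; disparate-impact testing on model outputs is a separate step with its own metrics.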
Description
Document all data collection, labeling, storage, and processing choices. Include data source descriptions and representativeness rationale.
What to do
- Create data sheet / data card for each training dataset
- Document collection methodology, labeling process, storage, and preprocessing
- Assess and document representativeness
What NOT to do
- Do NOT use training data without documenting provenance and processing steps
- Do NOT omit data source limitations from documentation
Evidence required
Data sheets/data cards, data processing records, representativeness assessment
Deadline
2026-08-02
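The data sheet / data card required above can be gated automatically: reject any training dataset whose card has empty fields. A minimal sketch; the field list is an illustrative selection, not the Act's official schema:

```python
# Fields an internal data card might require (illustrative, not official)
DATA_CARD_FIELDS = [
    "name", "source", "collection_method", "labeling_process",
    "preprocessing", "storage", "known_limitations",
    "representativeness_rationale",
]

def missing_fields(card):
    """Return data-card fields that are absent or empty, for audit gating."""
    return [f for f in DATA_CARD_FIELDS if not card.get(f)]
```

A CI check that fails on a non-empty `missing_fields` result keeps the documentation obligation enforced per dataset.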
Description
Keep documented records of who was trained, when, on what topics, and their assessment results. Records must be available for auditors.
What to do
- Create training register with: employee name, role, training date, topics, score
- Store records securely for audit
- Update when new staff join or roles change
What NOT to do
- Do NOT destroy or fail to maintain training completion records
- Do NOT accept unverified self-attestation as training evidence
Evidence required
Training register (spreadsheet or system), individual completion records
Deadline
2025-02-02
Penalty
€15M / 3% turnover
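The training register described above needs only a handful of columns to be auditable. A minimal sketch that exports it as CSV; the column names are illustrative:

```python
import csv
import io

FIELDS = ["employee", "role", "date", "topics", "score"]

def register_row(name, role, date, topics, score):
    """One training-register entry; topics joined for flat CSV storage."""
    return {"employee": name, "role": role, "date": date,
            "topics": ";".join(topics), "score": score}

def export_register(rows):
    """Dump the register as CSV, a minimal auditable artifact."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

The same rows can of course live in an LMS or HR system; the point is that each entry ties a named person, a date, and the topics covered to a verifiable result.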
Three tiers of enforcement
Fines apply at whichever is higher: the fixed amount or the percentage of global annual turnover.
Know your obligations. Now act on them.
Run a free compliance check on your AI tools in 30 seconds.
$ npx complior