
Deepfake-audio-detection EU AI Act Compliance Profile

mo-thecreator

Audio Classification · apache-2.0 · transformers
Risk Classification
HIGH RISK
Art. 6-15, 26-29
High-Risk AI System
Model Info
Pipeline: Audio Classification
Library: transformers
License: apache-2.0
Base Model: mo-thecreator/wav2vec2-base-finetuned
Synced: Apr 3, 2026
Popularity
4.8K downloads
15 likes
View on HuggingFace
Obligations
7 apply
~96h effort
AI Literacy (Art. 4)
AI Disclosure (Art. 50)
Human Oversight (Art. 26)
Data Governance (Art. 10)
$ npx complior scan

Your risk depends on how you use Deepfake-audio-detection

Usage Context          Risk Level  Obligations
Internal coding tool   MINIMAL     3 obligations (~12h)
Customer support bot   LIMITED     7 obligations (~32h)
HR screening / hiring  HIGH        19 obligations (~120h)
Credit decisions       HIGH        19 obligations (~120h)
Medical triage         HIGH        19 obligations (~120h)
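The usage-context mapping above can be sketched as a simple lookup. This is an illustration only, with the figures copied from the table in this profile; it is not part of any official complior API.

```python
# Sketch of the usage-context table above. Values are
# (risk level, obligation count, estimated effort in hours),
# copied from this profile; the key spellings are assumptions.
CONTEXT_RISK = {
    "internal coding tool":  ("MINIMAL", 3, 12),
    "customer support bot":  ("LIMITED", 7, 32),
    "hr screening / hiring": ("HIGH", 19, 120),
    "credit decisions":      ("HIGH", 19, 120),
    "medical triage":        ("HIGH", 19, 120),
}

def classify_usage(context: str) -> tuple[str, int, int]:
    """Return (risk level, obligation count, estimated hours) for a context."""
    return CONTEXT_RISK[context.lower()]
```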

Why this tool is classified as HIGH RISK

Deepfake-audio-detection is an audio classification model by mo-thecreator, fine-tuned from mo-thecreator/wav2vec2-base-finetuned and built with the transformers library. It is licensed under apache-2.0 and has 4.8K downloads on HuggingFace.
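Since the model uses the transformers audio-classification pipeline, running it locally might look like the sketch below. The Hub model id and the label names are assumptions (check the actual model card); the score-interpretation helper is hypothetical.

```python
# Minimal sketch; the Hub id "mo-thecreator/Deepfake-audio-detection"
# and the emitted labels are assumptions -- check the model card.

def top_prediction(scores: list[dict]) -> tuple[str, float]:
    """Pick the highest-scoring label from an audio-classification
    pipeline result (a list of {"label": str, "score": float} dicts)."""
    best = max(scores, key=lambda s: s["score"])
    return best["label"], best["score"]

def classify_file(path: str) -> tuple[str, float]:
    """Run the model on one audio file (downloads weights on first use)."""
    from transformers import pipeline  # pip install transformers torch
    clf = pipeline("audio-classification",
                   model="mo-thecreator/Deepfake-audio-detection")
    return top_prediction(clf(path))
```

`classify_file` is only defined, not called, so the snippet itself needs no network access; the helper can be exercised against any pipeline-shaped result.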

Applicable Articles

Art. 4: AI Literacy
REQUIRED · DEADLINE PASSED
Obligation under Art. 4 for Deepfake-audio-detection deployers.
Art. 50: AI Disclosure
REQUIRED · DEADLINE PASSED
Art. 26: Human Oversight
REQUIRED · AUG 2027
Art. 10: Data Governance
REQUIRED · AUG 2027
Art. 27: FRIA
REQUIRED · AUG 2027
Art. 9: Risk Management
REQUIRED · AUG 2027

Who does what

mo-thecreator (provider) · Their job

  • Provider obligations being compiled

You (deployer) · Your job

  • AI Literacy (Art. 4)
  • AI Disclosure (Art. 50)
  • Human Oversight (Art. 26)
  • Data Governance (Art. 10)
  • Robustness Monitoring (Art. 26)
See full obligation checklist

Risk Assessment Reasoning

The Deepfake-audio-detection tool is designed to identify manipulated audio content, which is critical for preventing misinformation and protecting individuals' rights. Given the potential misuse of deepfake technology, it falls under high-risk AI systems that require stringent compliance measures.

More models by mo-thecreator

Similar Audio Classification models

Browse all Audio Classification models

Frequently Asked Questions

What is Deepfake-audio-detection's EU AI Act risk classification?

Deepfake-audio-detection is classified as HIGH RISK under the EU AI Act. This means 7 mandatory obligations apply, including conformity assessment, a FRIA, and human oversight requirements.

What are my obligations if I deploy Deepfake-audio-detection?

As a Deepfake-audio-detection deployer, you have 7 base obligations (~96 hours estimated effort). Key articles: Art. 4, Art. 50, Art. 26, Art. 10.

What is Deepfake-audio-detection?

Deepfake-audio-detection is an Audio Classification model by mo-thecreator. It has 4.8K downloads on HuggingFace and is licensed under apache-2.0.

What are the EU AI Act deadlines for Deepfake-audio-detection?

Already passed:
  • AI Literacy (Art. 4) — 2025-02-02
  • AI Disclosure (Art. 50) — 2025-08-02
Upcoming (2027-08-02):
  • Human Oversight (Art. 26)
  • Data Governance (Art. 10)
  • Robustness Monitoring (Art. 26)
  • FRIA (Art. 27)
  • Risk Management (Art. 9)
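The passed/upcoming split above can be derived mechanically from the deadline dates. A minimal sketch, with the deadlines hard-coded from this profile (the function name is hypothetical):

```python
from datetime import date

# Deployer-obligation deadlines as listed in this profile.
DEADLINES = {
    "AI Literacy (Art. 4)":      date(2025, 2, 2),
    "AI Disclosure (Art. 50)":   date(2025, 8, 2),
    "Human Oversight (Art. 26)": date(2027, 8, 2),
    "Data Governance (Art. 10)": date(2027, 8, 2),
    "FRIA (Art. 27)":            date(2027, 8, 2),
    "Risk Management (Art. 9)":  date(2027, 8, 2),
}

def split_deadlines(today: date) -> tuple[list[str], list[str]]:
    """Split obligations into (already passed, upcoming) relative to `today`."""
    passed = [name for name, d in DEADLINES.items() if d <= today]
    upcoming = [name for name, d in DEADLINES.items() if d > today]
    return passed, upcoming
```

For example, evaluated at the profile's sync date (2026-04-03), the Art. 4 and Art. 50 deadlines come out as passed and the 2027 deadlines as upcoming.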

Check Deepfake-audio-detection compliance in your codebase

One command to scan. Open-source CLI.

$ npx complior scan