Gemini Pro EU AI Act Compliance Profile

Google

Risk Classification
GPAI SYSTEMIC
Art. 51-55
GPAI with Systemic Risk
Model Info

Provider: Google
Category: API platform
Obligations

4 obligations apply · ~20h estimated effort
Ensure AI Literacy of Staff
Label Deep Fakes and AI-Generated Content for Public
Provide Explanation of AI Decisions to Affected Persons
Cooperate with Regulatory Authorities
$ npx complior scan

Your risk depends on how you use Gemini Pro

Usage Context          | Risk Level | Obligations
Internal coding tool   | MINIMAL    | 3 obligations (~12h)
Customer support bot   | LIMITED    | 7 obligations (~32h)
HR screening / hiring  | HIGH       | 19 obligations (~120h)
Credit decisions       | HIGH       | 19 obligations (~120h)
Medical triage         | HIGH       | 19 obligations (~120h)
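The table above can be sketched as a simple lookup. This is a hypothetical structure mirroring this profile's tiers and hour estimates; it is not part of the complior API.

```python
# Hypothetical risk-tier lookup mirroring the usage-context table above.
# Tier names, obligation counts, and hour estimates come from this profile;
# the dict and function are illustrative, not complior internals.
RISK_BY_USE_CASE = {
    "internal_coding_tool": ("MINIMAL", 3, 12),
    "customer_support_bot": ("LIMITED", 7, 32),
    "hr_screening": ("HIGH", 19, 120),
    "credit_decisions": ("HIGH", 19, 120),
    "medical_triage": ("HIGH", 19, 120),
}

def risk_profile(use_case: str) -> str:
    """Summarize the risk tier for a given deployment context."""
    tier, obligations, hours = RISK_BY_USE_CASE[use_case]
    return f"{tier}: {obligations} obligations (~{hours}h)"

print(risk_profile("hr_screening"))  # HIGH: 19 obligations (~120h)
```

Note that the same model lands in three different tiers depending solely on deployment context, which is why a per-use-case scan matters.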

Why this tool is classified as GPAI SYSTEMIC

Gemini Pro is a general-purpose, balanced-performance model from Google. As a GPAI model designated with systemic risk, it falls under the additional obligations of Articles 51-55.

Applicable Articles

Article 4: Ensure AI Literacy of Staff
REQUIRED · DEADLINE PASSED
Deployers must ensure that staff operating Gemini Pro have a sufficient level of AI literacy, per Article 4.
Article 50(4): Label Deep Fakes and AI-Generated Content for Public
REQUIRED · AUG 2026
Article 26(11) / Article 86: Provide Explanation of AI Decisions to Affected Persons
REQUIRED · AUG 2026
Article 26(10) / Article 21: Cooperate with Regulatory Authorities
REQUIRED · AUG 2026
Article 50(1): Disclose AI Interaction to Users — Chatbot/Assistant
PROVIDER: Google
Article 50(2): Mark AI-Generated Content — Machine-Readable
PROVIDER: Google
Article 53(1)(a)-(b) / Annex XI / Annex XII: GPAI Technical Documentation per Annex XI
PROVIDER: Google
Article 53(1)(b) / Annex XII: GPAI Downstream Provider Information (Annex XII)
PROVIDER: Google
Article 53(1)(c): GPAI Copyright Compliance Policy
PROVIDER: Google
Article 53(1)(d): GPAI Publish Training Data Summary
PROVIDER: Google
Article 55: GPAI Systemic Risk — Model Evaluation and Adversarial Testing
PROVIDER: Google

Who does what

Google (provider) · Their job

  • Ensure AI Literacy of Staff (Article 4)
  • Disclose AI Interaction to Users — Chatbot/Assistant (Article 50(1))
  • Mark AI-Generated Content — Machine-Readable (Article 50(2))
  • Mark AI-Generated Images — C2PA/Watermark (Article 50(2))
  • GPAI: Technical Documentation per Annex XI (Article 53(1)(a)-(b) / Annex XI / Annex XII)

You (deployer) · Your job

  • Ensure AI Literacy of Staff (Article 4)
  • Label Deep Fakes and AI-Generated Content for Public (Article 50(4))
  • Provide Explanation of AI Decisions to Affected Persons (Article 26(11) / Article 86)
  • Cooperate with Regulatory Authorities (Article 26(10) / Article 21)
See full obligation checklist
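The provider/deployer split above can also be represented as data, for example to filter a checklist by role. The structure below is illustrative only (article numbers and titles are taken from the lists above; the `"both"` tag reflects that Article 4 appears under both roles):

```python
# Illustrative obligation registry built from the "Who does what" lists
# above. Not a complior data structure; field layout is an assumption.
OBLIGATIONS = [
    ("Article 4", "Ensure AI Literacy of Staff", "both"),
    ("Article 50(1)", "Disclose AI Interaction to Users", "provider"),
    ("Article 50(2)", "Mark AI-Generated Content (Machine-Readable)", "provider"),
    ("Article 50(4)", "Label Deep Fakes and AI-Generated Content for Public", "deployer"),
    ("Article 26(11) / Article 86", "Provide Explanation of AI Decisions to Affected Persons", "deployer"),
    ("Article 26(10) / Article 21", "Cooperate with Regulatory Authorities", "deployer"),
]

def checklist(role: str) -> list[str]:
    """Obligations relevant to a role: 'provider' or 'deployer'."""
    return [f"{art}: {title}" for art, title, who in OBLIGATIONS if who in (role, "both")]

print(len(checklist("deployer")))  # 4, matching the deployer list above
```

Filtering by role reproduces the 4 deployer obligations stated in this profile; the provider side here lists only a subset of Google's full obligations.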

Risk Assessment Reasoning

Gemini Pro is classified as a GPAI model with systemic risk under Articles 51-55 of the EU AI Act. Systemic risk is presumed where a model's cumulative training compute exceeds 10^25 FLOPs (Article 51(2)). Additional provider obligations apply, including model evaluation, adversarial testing, and serious-incident reporting under Article 55.

Frequently Asked Questions

What is Gemini Pro's EU AI Act risk classification?


Gemini Pro is classified as GPAI SYSTEMIC under the EU AI Act. However, the risk level of your specific deployment depends on your use case: internal tools may be minimal risk, while HR screening or credit decisions escalate to high risk.

What are my obligations if I deploy Gemini Pro?


As a Gemini Pro deployer, you have 4 base obligations (~20 hours estimated effort). Key articles: Article 4, Article 50(4), Article 26(11) / Article 86, Article 26(10) / Article 21.

What is Gemini Pro?


Gemini Pro is a proprietary general-purpose model from Google, offered as an API service rather than as open weights; it is not distributed on HuggingFace, so download counts do not apply.

What are the EU AI Act deadlines for Gemini Pro?


Already passed: Ensure AI Literacy of Staff (2025-02-02). Upcoming (2026-08-02): Label Deep Fakes and AI-Generated Content for Public; Provide Explanation of AI Decisions to Affected Persons; Cooperate with Regulatory Authorities.
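Deadline status is a pure function of today's date against the dates listed above. A minimal sketch (dates from this profile; the function itself is hypothetical, not part of complior):

```python
from datetime import date

# Deadlines listed in this profile; keys are the deployer obligations.
DEADLINES = {
    "Ensure AI Literacy of Staff": date(2025, 2, 2),
    "Label Deep Fakes and AI-Generated Content for Public": date(2026, 8, 2),
    "Provide Explanation of AI Decisions to Affected Persons": date(2026, 8, 2),
    "Cooperate with Regulatory Authorities": date(2026, 8, 2),
}

def deadline_status(today: date) -> dict[str, str]:
    """Mark each obligation as PASSED or report the days remaining."""
    return {
        obligation: "PASSED" if deadline <= today
        else f"{(deadline - today).days} days left"
        for obligation, deadline in DEADLINES.items()
    }

for obligation, status in deadline_status(date(2026, 2, 2)).items():
    print(f"{obligation}: {status}")
```

Run against the current date instead of a fixed one to get a live countdown; only the Article 4 deadline has already passed as of this profile.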

Check Gemini Pro compliance in your codebase

One command to scan. Open-source CLI.

$ npx complior scan