# Intelligence Briefing: High-Risk AI Systems Under the EU AI Act – Complete Examples List
## 1. What the Regulation Requires and Who It Applies To
The EU AI Act (Regulation (EU) 2024/1689) classifies AI systems into risk categories, with high-risk systems subject to stringent obligations under Articles 8–15. These requirements apply to providers, deployers, importers, and distributors of AI systems within the EU, as well as to systems placed on the EU market or used in the EU, regardless of where the provider is based.

### Key Obligations for High-Risk AI Systems (Articles 8–15)
- Risk Management System (Article 9): Continuous assessment of risks throughout the AI system’s lifecycle.
- Data Governance (Article 10): High-quality training, validation, and testing datasets, ensuring relevance, representativeness, and absence of biases.
- Technical Documentation (Article 11): Comprehensive records demonstrating compliance, including design, development, and testing phases.
- Transparency & Provision of Information (Article 13): Clear instructions for use and disclosures to deployers.
- Human Oversight (Article 14): Design measures enabling natural persons to effectively oversee the system.
- Accuracy, Robustness, and Cybersecurity (Article 15): Measures to ensure resilience against vulnerabilities and attacks.
### Who Must Comply?
- Providers (developers of AI systems) must ensure compliance before market placement.
- Deployers (users of AI systems) must follow operational requirements, including human oversight.
- Importers & Distributors must verify compliance before making systems available.

### High-Risk Use Cases (Annex III)
- Biometric identification and categorization (e.g., remote biometric identification in law enforcement).
- Critical infrastructure management (e.g., AI in energy, transport).
- Education and vocational training (e.g., AI for student assessment).
- Employment, worker management, and access to self-employment (e.g., AI in hiring, performance evaluation).
- Access to essential private and public services (e.g., credit scoring, social benefits allocation).
- Law enforcement (e.g., predictive policing, crime forecasting).
- Migration, asylum, and border control management (e.g., AI in visa processing).
- Administration of justice and democratic processes (e.g., AI in judicial decision-support).
### Prohibited AI Practices (Article 5)
The following practices are banned outright rather than merely classified as high-risk:
- Social scoring systems.
- Manipulative or exploitative AI techniques.
- Real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions).
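The use-case categories above can be encoded as a simple triage lookup. This is an illustrative sketch only: the labels are simplified paraphrases, not legal text, and the distinction it draws reflects the Act's structure (social scoring, manipulative techniques, and real-time remote biometric identification in public spaces are prohibited under Article 5 rather than merely high-risk).

```python
# Hypothetical triage helper; category labels are simplified paraphrases
# of Annex III and Article 5, not the legal wording.
HIGH_RISK = {
    "biometric_identification", "critical_infrastructure", "education",
    "employment", "essential_services", "law_enforcement",
    "migration_border_control", "justice_democracy",
}
PROHIBITED = {
    "social_scoring", "manipulative_techniques",
    "realtime_remote_biometric_id_public",
}

def classify(use_case: str) -> str:
    """Return a rough EU AI Act treatment for a simplified use-case label."""
    if use_case in PROHIBITED:
        return "prohibited (Article 5)"
    if use_case in HIGH_RISK:
        return "high-risk (Annex III, Articles 8-15 apply)"
    return "not in this list; assess separately"

print(classify("employment"))  # high-risk (Annex III, Articles 8-15 apply)
```

A real classification turns on the system's intended purpose and context, so a lookup like this can only flag candidates for legal review, never replace it.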
## 2. Enforcement Precedents
As of mid-2025, ahead of the high-risk compliance deadline in August 2026, no EU AI Act enforcement cases have been recorded in the provided sources. However, GDPR enforcement actions offer indirect precedents for AI-related penalties, particularly in biometric data processing:
- Netherlands (AP – Boete vingerafdrukken personeel ["employee fingerprint fine"], 2019): A €900,000 fine for an unlawful fingerprint-based employee attendance system, citing GDPR violations (biometric data as special-category data).
- Belgium (APD/GBA – 114/2024): A €45,000 fine for unlawful processing of biometric data in a workplace context, upheld under GDPR.
- Lithuania (VDAI – ETid-732, 2023): A €20,000 fine for improper biometric data handling by a fitness company.
### Expected EU AI Act Enforcement Timeline
- February 2025: Prohibited AI practices take effect.
- August 2025: Obligations for general-purpose AI models apply; most high-risk compliance deadlines follow on 2 August 2026.
- 2026–2027: National market surveillance authorities (e.g., DPAs, AI boards) will likely issue the first penalties, with fines of up to €35M or 7% of global annual turnover (whichever is higher).
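The penalty ceiling above reduces to a simple calculation. The sketch below shows only the statutory upper bound for the most serious infringements; the actual fine a national authority imposes depends on the infringement and on aggravating or mitigating factors.

```python
def max_ai_act_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound for the most serious EU AI Act infringements:
    EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# A company with EUR 1 billion turnover: 7% (EUR 70M) exceeds the EUR 35M floor.
print(max_ai_act_fine(1_000_000_000))  # 70000000.0
```

For smaller companies the flat €35M figure dominates, which is why the "whichever is higher" clause matters mainly for large undertakings.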
## 3. Practical Compliance Steps for High-Risk AI Systems
Providers and deployers should implement the following measures to ensure compliance:
- Conduct a Risk Assessment (Article 9): identify, evaluate, and mitigate risks across the system's lifecycle, and revisit the analysis after substantial modifications.
- Ensure Data Governance (Article 10): use relevant, representative training, validation, and testing datasets, and examine them for possible biases.
- Develop Technical Documentation (Article 11): record design choices, development process, and test results before market placement, and keep the records up to date.
- Implement Transparency & Human Oversight (Articles 13–14): supply clear instructions for use and design the system so deployers can effectively oversee it.
- Strengthen Cybersecurity & Robustness (Article 15): test accuracy and resilience against errors, faults, and adversarial attacks.
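Steps like those above can be tracked internally as a simple compliance checklist. The structure below is an illustrative sketch; the Act prescribes the obligations, not any particular tooling, and the obligation wording here is paraphrased.

```python
from dataclasses import dataclass, field

@dataclass
class ComplianceItem:
    """One high-risk obligation being tracked, with supporting evidence."""
    article: int
    obligation: str
    done: bool = False
    evidence: list[str] = field(default_factory=list)

checklist = [
    ComplianceItem(9, "Risk management system established and maintained"),
    ComplianceItem(10, "Data governance for training/validation/test sets documented"),
    ComplianceItem(11, "Technical documentation drawn up and kept current"),
    ComplianceItem(13, "Instructions for use and transparency information provided"),
    ComplianceItem(15, "Accuracy, robustness, and cybersecurity measures tested"),
]

outstanding = [item for item in checklist if not item.done]
print(f"{len(outstanding)} of {len(checklist)} obligations outstanding")
# 5 of 5 obligations outstanding
```

Attaching evidence (audit reports, test logs, dataset documentation) to each item mirrors what Article 11's technical documentation must ultimately demonstrate to a market surveillance authority.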