Intelligence Briefing: Fundamental Rights Impact Assessment Under the EU AI Act
1. What the Regulation Requires and Who It Applies To

The EU AI Act (Regulation (EU) 2024/1689) mandates a Fundamental Rights Impact Assessment (FRIA) for certain high-risk AI systems under Article 27, building on the high-risk requirements of Articles 8–15. High-risk AI systems (those posing significant risks to health, safety, or fundamental rights) must be assessed before deployment. Deployers that are public sector bodies (e.g., municipalities, regions) are explicitly covered by the FRIA obligation, as highlighted in the guidance on AI in the public sector ([ai_office] AI Act: AI i den offentlige sektor — krav til kommuner og regioner, "AI in the public sector: requirements for municipalities and regions").
Key requirements include:
- Risk identification and mitigation (Art. 9): Providers must establish a risk management system that assesses how the AI system may infringe fundamental rights, supported by data-governance measures against bias and discrimination (Art. 10) and transparency obligations (Art. 13).
- Technical documentation (Art. 11): High-risk AI systems must be accompanied by technical documentation demonstrating compliance, including with fundamental rights safeguards.
- Post-market monitoring (Art. 72): Continuous assessment of the system's impact on rights after deployment, with corrective measures if risks materialize.
2. Enforcement Precedents

As of this briefing, no FRIA-specific enforcement cases under the EU AI Act have been recorded. However, precedent from GDPR enforcement, which protects overlapping fundamental rights, suggests that authorities will take a rigorous approach to rights-based assessments. For example:
- CNIL (France) imposed a sanction (SAN-2023-076) for automated decision-making lacking transparency, aligning with the AI Act's transparency requirements ([gdprhub|FR] CNIL (France) - SAN-2023-076).
- AKI (Estonia) fined a company (2.1.-5/24/2203-8) for opaque data processing, reinforcing the need for clear documentation of rights impacts ([gdprhub|EE] AKI (Estonia) - 2.1.-5/24/2203-8).
- AP (Netherlands) and Garante (Italy) have similarly emphasized proportionality and necessity in automated systems, principles mirrored in the AI Act’s FRIA obligations ([gdprhub|NL] AP (The Netherlands) - Decision of 18 December 2023; [gdprhub|IT] Garante per la protezione dei dati personali (Italy) - 10077129).
3. Practical Compliance Steps

To ensure FRIA compliance, organizations should:
- Map high-risk AI systems to identify those subject to FRIA (e.g., biometric identification, critical infrastructure management). The public sector guidance ([ai_office] AI Act: AI i den offentlige sektor — krav til kommuner og regioner) provides sector-specific examples.
- Conduct a rights-impact analysis using the risk management framework (Art. 9), documenting potential infringements on rights such as non-discrimination, privacy, and data protection.
- Implement mitigation measures (Art. 10–15), such as bias testing and data-governance controls (Art. 10), human oversight (Art. 14), and accuracy and robustness safeguards (Art. 15).
- Maintain a FRIA report (Art. 27) for national competent authorities upon request, detailing risks, mitigations, and monitoring plans.
- Train staff on FRIA requirements, particularly in public sector roles where AI adoption is accelerating ([ai_office] AI Act: AI i den offentlige sektor — krav til kommuner og regioner).
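The record-keeping steps above can be sketched as a minimal internal data structure. This is an illustrative assumption only: the field names, categories, and completeness check below are our own shorthand for internal tracking, not terminology or a format mandated by the AI Act.

```python
from dataclasses import dataclass, field

# Illustrative sketch: field names are internal shorthand, not AI Act terms.
@dataclass
class FRIARecord:
    system_name: str
    high_risk_category: str                                  # e.g. "biometric identification"
    rights_at_risk: list[str] = field(default_factory=list)  # e.g. ["non-discrimination"]
    mitigations: list[str] = field(default_factory=list)     # e.g. ["human oversight (Art. 14)"]
    monitoring_plan: str = ""                                # post-market monitoring summary

def missing_fields(record: FRIARecord) -> list[str]:
    """Return the names of sections still empty before the record is complete."""
    gaps = []
    if not record.rights_at_risk:
        gaps.append("rights_at_risk")
    if not record.mitigations:
        gaps.append("mitigations")
    if not record.monitoring_plan:
        gaps.append("monitoring_plan")
    return gaps

# Example: a draft record that still lacks a monitoring plan
draft = FRIARecord(
    system_name="benefits-eligibility-scoring",
    high_risk_category="access to essential public services",
    rights_at_risk=["non-discrimination", "data protection"],
    mitigations=["bias testing", "human oversight (Art. 14)"],
)
print(missing_fields(draft))  # -> ['monitoring_plan']
```

A structured register like this makes it straightforward to answer an authority's request with a complete list of systems, risks, and mitigations rather than assembling documentation ad hoc.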
4. Cross-Border Differences

While the AI Act is directly applicable across the EU, national authorities may interpret fundamental rights risks differently, leading to variations in enforcement:
- France and Italy have historically taken a strict stance on automated decision-making (e.g., CNIL’s sanctions), suggesting rigorous FRIA scrutiny for public sector AI ([gdprhub|FR] CNIL (France) - SAN-2023-076; [gdprhub|IT] Garante per la protezione dei dati personali (Italy) - 10077129).
- Estonia and