§ AI Act · Article 27

Fundamental Rights Impact Assessment for High-Risk AI Systems

Regulation (EU) 2024/1689 · Article 27
Plain-language summary
Deployers of high-risk AI systems that are public bodies or private entities providing public services must conduct a Fundamental Rights Impact Assessment before deploying such systems. The assessment requires describing the intended use, affected individuals, potential risks, human oversight measures, and mitigation steps. It must be performed prior to first use and updated as needed. The results must be reported to the market surveillance authority, unless an exemption applies. The AI Office will provide a template to standardize this process.
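The elements the assessment must document can be pictured as a simple record. The sketch below is purely illustrative (the class name, fields, and `is_complete` check are assumptions, not anything prescribed by the Regulation or the forthcoming AI Office template); it mirrors the elements listed in the summary above:

```python
from dataclasses import dataclass, field

@dataclass
class FRIARecord:
    """Illustrative record of the elements a deployer documents in a
    Fundamental Rights Impact Assessment (hypothetical structure)."""
    intended_use: str                                            # description of the intended use
    affected_persons: list[str] = field(default_factory=list)    # categories of individuals likely affected
    risks_of_harm: list[str] = field(default_factory=list)       # potential risks to those individuals
    oversight_measures: list[str] = field(default_factory=list)  # human oversight measures
    mitigation_steps: list[str] = field(default_factory=list)    # measures if risks materialise

    def is_complete(self) -> bool:
        # Each element must be filled in before first use of the system.
        return bool(self.intended_use) and all([
            self.affected_persons,
            self.risks_of_harm,
            self.oversight_measures,
            self.mitigation_steps,
        ])
```

Because the assessment must be updated as circumstances change, a deployer would revisit such a record whenever the intended use or risk picture shifts, rather than treating it as a one-off filing.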
Who it applies to
Deployers of high-risk AI systems that are bodies governed by public law; private entities providing public services; and deployers of high-risk AI systems referred to in points 5(b) and 5(c) of Annex III.
Compliance deadline
Phased application of Regulation (EU) 2024/1689 — most provisions apply from 2 August 2026.
§ What Fontvera found

Documents that cite Article 27

ai_office EU Fetched 2026-04