§ AI Act TOPICAL

Building the Article 9 risk-management system

Article 9 is the spine of high-risk compliance. Lifecycle, hazard identification, mitigation, residual-risk acceptance — and the integration with what you already do.

Summary

Article 9 sets up the spine that holds the rest of the high-risk regime together. It is a continuous, iterative, lifecycle risk-management system — not a one-off pre-market exercise. Articles 10 (data), 11 (documentation), 13 (transparency), 14 (oversight), 15 (robustness), 72 (post-market monitoring), and 73 (incident reporting) all connect into it.

The structure is recognisable from ISO 31000 and (for medical devices) ISO 14971: identify hazards, estimate and evaluate risks, mitigate, accept residual risk, monitor in operation. The AI-specific additions are Article 9(2)(b)'s coverage of risks arising under reasonably foreseeable misuse and Article 9(9)'s explicit consideration of impact on persons under 18 and other vulnerable groups.

The most common compliance failure is treating Article 9 as a document deliverable rather than a live process. Authorities will look for evidence the risk register changed during development, that mitigations were implemented, and that the post-market monitoring under Article 72 actually feeds back into the risk register.

Who this applies to
Providers of high-risk AI systems and their quality and risk teams, including those integrating Article 9 with ISO/IEC 42001, ISO 14971, ISO 31000, or sectoral risk frameworks.
Compliance deadline
2 August 2026 — high-risk AI system obligations apply. The Digital Omnibus (Council + Parliament agreed positions, March 2026) may shift this to 2 December 2027 for Annex III systems and 2 August 2028 for Annex I products. Until the amending regulation is published in the Official Journal, plan for 2 August 2026.
§ Key articles

What the law says

Article 9(1)
Establishment, implementation, documentation and maintenance of a risk-management system.
Article 9(2)
A continuous iterative lifecycle process: identification and analysis of foreseeable risks; estimation and evaluation under intended purpose and reasonably foreseeable misuse; evaluation of other risks from post-market monitoring data; adoption of targeted risk-management measures.
Article 9(3)
Scope limited to risks that can reasonably be mitigated or eliminated through design and development, or through adequate technical information.
Article 9(5)
Residual risks must be judged acceptable; risks eliminated or reduced as far as technically feasible; mitigation information and training provided to deployers.
Article 9(9)
Specific consideration when the system is likely to have an adverse impact on persons under 18 or other vulnerable groups.
§ Detail

In depth

The Article 9 lifecycle

Article 9(2) is explicit: the risk-management system is "a continuous iterative process planned and run throughout the entire lifecycle of the high-risk AI system, requiring regular systematic review and updating." In practice, that means the risk register cannot be frozen at conformity assessment: it changes during development, is revisited on substantial modification, and is updated from post-market monitoring once the system is in service.

The five-step procedure under Articles 9(2) and 9(5)

  1. Identification and analysis of the known and reasonably foreseeable risks the system can pose to health, safety or fundamental rights, including discrimination, when used in accordance with its intended purpose (Article 9(2)(a)).
  2. Estimation and evaluation of the risks that may emerge under the intended purpose and under conditions of reasonably foreseeable misuse (Article 9(2)(b)). The "reasonably foreseeable" qualifier matters: if a deployer might foreseeably use the system in a way that produces a risk, that misuse-risk is in scope.
  3. Evaluation of other possibly arising risks, based on data gathered by the Article 72 post-market monitoring system (Article 9(2)(c)).
  4. Adoption of appropriate and targeted risk-management measures designed to address the risks identified (Article 9(2)(d)). Article 9(5)(a) requires elimination or reduction of risks "as far as technically feasible through adequate design and development".
  5. Evaluation of residual risk. Article 9(5) requires that the relevant residual risks, and the overall residual risk, are judged acceptable, with mitigation information passed to the deployer. The acceptability judgment should be documented and signed off by an accountable person.



Article 9(9) — minors

Where the system is likely to be accessed by, or to have an adverse impact on, persons under 18, Article 9(9) requires explicit consideration of that impact, and of other vulnerable groups as appropriate. This is operative in education AI (most of Annex III point 3), in some healthcare AI, and in any consumer system reasonably accessible to children. The risk register must include child-specific considerations: developmental impact, reduced capacity to recognise AI, parental notice, age-appropriate transparency.


§ Action items

Practical steps

01
Build the Article 9 risk register as a versioned live artifact, not a snapshot document.
02
Map each identified risk to a specific Article 10/14/15 mitigation; document the linkage.
03
Designate an accountable signer for residual-risk acceptability — not a committee.
04
Connect the Article 72 post-market monitoring plan into the Article 9 update cycle as a formal input.
05
Where minors or vulnerable groups are reasonably foreseeable users, document the Article 9(9) impact analysis and the Article 9(2)(b) misuse analysis explicitly.
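Action items 01 and 04 can be combined in one mechanism: a register whose only write path creates a new version, so that Article 72 findings leave an audit trail rather than overwriting history. A hedged sketch under that assumption; every name here is illustrative, nothing is prescribed by the Act.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RegisterVersion:
    version: int
    changed_on: date
    trigger: str   # "development", "substantial_modification", "post_market_finding"
    note: str

@dataclass
class RiskRegister:
    history: list[RegisterVersion] = field(default_factory=list)

    @property
    def version(self) -> int:
        # The current version is simply the length of the append-only history.
        return len(self.history)

    def record_post_market_finding(self, note: str, on: date) -> int:
        """Article 72 findings enter the register as a new version, preserving
        the audit trail authorities look for (evidence the register changed)."""
        self.history.append(
            RegisterVersion(self.version + 1, on, "post_market_finding", note)
        )
        return self.version
```

Because the history is append-only, "show us how the register changed during development and operation" becomes a query over `history` rather than a document-archaeology exercise.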
§ What Fontvera found

Documents in our corpus

EIOPA (EU, fetched 2026-04): Opinion on Artificial Intelligence governance and risk management
EUR-Lex (EU, fetched 2026-04): 32025R0454 (2025-03-07)