The nine sections of Annex IV
- General description of the AI system. Intended purpose, name and version, persons who developed it, hardware/software requirements, instructions for use.
- Detailed description of design and development. Methods used, design choices, computational resources, training and validation procedures, key design decisions.
- Information about the data. Provenance, scope, characteristics, quality controls, governance, cleaning and labelling procedures, bias-mitigation measures.
- Detailed description of monitoring, functioning, and control of the AI system. Capabilities and limitations, expected performance levels, foreseeable unintended outcomes, human oversight measures, technical measures for output interpretation.
- Description of the risk-management measures and the risk-management system per Article 9. Risk register, residual risks, acceptability rationale.
- Detailed description of relevant changes made through the lifecycle. Version control of training data, models, and validation; substantial-modification analysis under Article 43(4).
- List of harmonised standards applied in full or in part, references published in the OJEU; or, where harmonised standards are not applied, a description of solutions adopted to meet the requirements.
- Copy of the EU declaration of conformity per Article 47.
- Detailed description of the system in place to evaluate the AI system's performance in the post-market phase per Article 72.
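The nine sections above map naturally onto a folder-per-section layout in a docs repo. A minimal scaffolding sketch, assuming hypothetical slugs and a per-section changelog convention (nothing here is mandated by Annex IV itself):

```python
from pathlib import Path

# Hypothetical slugs mirroring the nine Annex IV sections listed above.
ANNEX_IV_SECTIONS = {
    1: "general-description",
    2: "design-and-development",
    3: "data",
    4: "monitoring-functioning-control",
    5: "risk-management",
    6: "lifecycle-changes",
    7: "harmonised-standards",
    8: "declaration-of-conformity",
    9: "post-market-monitoring",
}

def scaffold(root: str) -> list[Path]:
    """Create one folder per Annex IV section, each seeded with an empty changelog."""
    created = []
    for number, slug in ANNEX_IV_SECTIONS.items():
        section_dir = Path(root) / f"{number:02d}-{slug}"
        section_dir.mkdir(parents=True, exist_ok=True)
        (section_dir / "CHANGELOG.md").touch()
        created.append(section_dir)
    return created
```

Numbered folder names keep the file browsable in Annex IV order, which is the order an auditor will read it in.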
How to structure the file in practice
The simplest workable structure mirrors Annex IV section-by-section. Each section is a living document maintained in the company's documentation system (Confluence, Notion, an IRP, a Git-based docs repo). At each significant release:
- Each section gets a version stamp and change log.
- The risk register (Section 5) is updated to reflect any new risks or mitigations.
- The data section (Section 3) is updated for any change in training data or governance.
- Section 6 (changes through lifecycle) gets the new version's substantial-modification analysis.
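The release checklist above is easy to enforce in CI. A sketch, assuming each section lives in an `index.md` with a `Version:` header line (both the filename and the header convention are assumptions, not anything prescribed by the Act):

```python
import re
from pathlib import Path

# Matches a "Version: x.y" stamp at the start of a line in a section doc.
VERSION_STAMP = re.compile(r"^Version:\s*(\S+)", re.MULTILINE)

def sections_out_of_date(docs_root: str, release_version: str) -> list[str]:
    """Return section files whose 'Version:' stamp is missing or does not
    match the current release, i.e. sections not yet re-reviewed."""
    stale = []
    for doc in sorted(Path(docs_root).glob("*/index.md")):
        match = VERSION_STAMP.search(doc.read_text(encoding="utf-8"))
        if match is None or match.group(1) != release_version:
            stale.append(str(doc))
    return stale
```

Failing the release pipeline when this list is non-empty turns "living document" from an aspiration into a gate.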
What auditors actually inspect
Across the three notified bodies that have published preliminary AI Act audit guidance, the focus areas are:
- Section 3 (data). Whether the bias-testing methodology actually ran on the data described, with documented results across protected characteristics and edge cases.
- Section 4 (functioning). Whether the stated limitations are realistic — auditors test the system against documented edge cases.
- Section 5 (risk). Whether residual risks and mitigations form a coherent chain — every Article 9 risk traceable to a mitigation traceable to an Article 10/14/15 implementation.
- Section 6 (changes). Whether substantial modifications between audits would have triggered re-assessment under Article 43(4).
- Section 9 (post-market). Whether the post-market plan is operating, with evidence of input/output flowing.
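The Section 5 coherence check (every Article 9 risk traceable to a mitigation, and every mitigation to an Article 10/14/15 implementation) can be pre-empted before the auditor finds the gap. A minimal sketch over a hypothetical register represented as plain dicts (the IDs and the dict shape are illustrative assumptions):

```python
def broken_chains(
    risks: dict[str, str],            # risk ID -> mitigation ID
    mitigations: dict[str, str],      # mitigation ID -> implementation ID
    implementations: set[str],        # implementation references that exist
) -> list[str]:
    """Return risk IDs whose traceability chain does not end in a
    documented implementation reference."""
    broken = []
    for risk_id, mitigation_id in risks.items():
        impl_id = mitigations.get(mitigation_id)
        if impl_id is None or impl_id not in implementations:
            broken.append(risk_id)
    return broken

# Illustrative register: R2's implementation is missing, R3's mitigation is.
risks = {"R1": "M1", "R2": "M2", "R3": "M9"}
mitigations = {"M1": "I1", "M2": "I7"}
implementations = {"I1"}
print(broken_chains(risks, mitigations, implementations))  # ['R2', 'R3']
```

Running this against the real risk register at each release keeps the chain auditable by construction rather than by last-minute reconciliation.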
Trade secrets and IP
Article 78 protects trade secrets in the documentation that authorities access. Providers can mark proprietary methodology, model weights, and architectural details as confidential. The technical-documentation obligation does not entail public disclosure of training-data sources; the Annex IV file is shared with the relevant authority on request, not published. Article 13 instructions for use are deployer-facing and may be published; Annex IV Section 3 details remain inside the provider's compliance file.
SME and start-up simplification
Article 11(3) directs the Commission to adopt an implementing act setting up a simplified Annex IV form for SMEs and start-ups. As of April 2026 that implementing act is in preparation and not yet published; SMEs draft against full Annex IV until the simplified form lands. The substantive obligations do not change — only the structure of presentation.