Where the line sits
Credit-scoring AI captured by Annex III §5(b) includes any system that evaluates a natural person's creditworthiness or establishes a credit score: consumer lending, SME lending where the borrower or guarantor is a natural person, BNPL, and most tenant-screening contexts. The §5(b) carve-out for financial-fraud detection is narrow: pure fraud-pattern matching falls outside, but a unified risk-and-fraud model that affects the credit outcome falls inside.
Insurance under §5(c) captures risk assessment and pricing in life and health insurance. Property and casualty lines (motor, home, travel) are not in §5(c), but motor telematics that influences pricing and indirectly assesses individual driver risk has triggered debate. The conservative reading of §5(c) covers most behavioural pricing models for life and health.
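To make the boundary concrete, here is a minimal triage sketch in Python. The profile fields and function name are illustrative assumptions, not statutory language, and the real classification is a legal judgment on the facts of the system.

```python
from dataclasses import dataclass

# Illustrative scope triage for Annex III §5(b)/§5(c). All names are
# assumptions for this sketch; actual scoping is a legal judgment.

@dataclass
class SystemProfile:
    evaluates_creditworthiness: bool  # scores or assesses a natural person's credit risk
    affects_credit_outcome: bool      # output feeds the lending decision
    pure_fraud_detection: bool        # fraud-pattern matching only
    life_or_health_pricing: bool      # insurance risk assessment or pricing, life/health

def annex_iii_point_5_triage(p: SystemProfile) -> str:
    # The §5(b) carve-out: pure fraud detection is out, but a unified
    # risk-and-fraud model that shapes the credit outcome is in.
    if p.pure_fraud_detection and not p.affects_credit_outcome:
        return "outside §5(b): fraud-detection carve-out"
    if p.evaluates_creditworthiness or p.affects_credit_outcome:
        return "high-risk under §5(b)"
    if p.life_or_health_pricing:
        return "high-risk under §5(c)"
    return "outside Annex III point 5: check the other Annex III points"
```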
Provider obligations
- Article 9 risk management with explicit attention to discrimination risk across protected characteristics.
- Article 10 data governance: proxy-discrimination testing on geography, postcode, name origin, and other proxies for protected characteristics; training-data documentation; statistical bias mitigation (see the subgroup-testing sketch after this list).
- Article 11 + Annex IV technical documentation, which underpins the conformity assessment and records the testing methodology and the performance metrics across population subgroups.
- Article 13 instructions for use written for credit officers and underwriters, not data scientists. The instructions must state when the model can be used and the residual risks the deployer must manage.
- Article 14 oversight measures designed in: a clear path for the deployer to overturn the model output and act on additional information.
- Internal Annex VI conformity assessment is the default route.
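As flagged in the Article 10 bullet above, proxy-discrimination testing is ultimately a statistics exercise. A minimal sketch follows, assuming a pandas DataFrame with illustrative columns `score`, `approved`, `defaulted`, and a proxy column such as `postcode_region`; the 0.8 threshold mirrors the four-fifths rule and is an assumption of this sketch, not an AI Act requirement.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_report(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Approval-rate disparity and score discrimination power per subgroup."""
    rows = []
    for group, g in df.groupby(group_col):
        # AUC needs both defaulters and non-defaulters in the subgroup.
        auc = (roc_auc_score(g["defaulted"], g["score"])
               if g["defaulted"].nunique() == 2 else float("nan"))
        rows.append({
            group_col: group,
            "n": len(g),
            "approval_rate": g["approved"].mean(),
            "auc": auc,
        })
    report = pd.DataFrame(rows)
    # Four-fifths-style adverse impact ratio against the best-treated group.
    report["impact_ratio"] = report["approval_rate"] / report["approval_rate"].max()
    return report
```

In use, rows with an impact ratio below 0.8, or a visibly weaker subgroup AUC, would be flagged for investigation, and the results filed with the Annex IV documentation.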
Deployer obligations (the bank or insurer)
- Article 26(1): use within the provider's intended purpose.
- Article 26(2): assign oversight to credit-decisioning staff with the authority to override.
- Article 26(5): monitor operation and notify the provider and the market surveillance authority of any serious incident under the Article 73 regime.
- Article 27 fundamental rights impact assessment, explicitly required for private deployers of §5(b) and §5(c) systems. The FRIA covers the intended purpose, the period and frequency of use, the categories of natural persons affected, the foreseeable impact on fundamental rights including the risk of bias, the human-oversight design, and the mitigations (a record-structure sketch follows this list).
- GDPR Article 22 safeguards on adverse automated decisions, including the right to human intervention, plus an Article 13/14 notice giving meaningful information about the logic involved.
- For consumer credit, layered disclosures under the Consumer Credit Directive 2 (Directive (EU) 2023/2225), including the right to human intervention in automated creditworthiness assessments under CCD2 Article 18(8).
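As referenced in the Article 27 bullet above, the FRIA elements translate naturally into a structured record. A minimal sketch; the field names are assumptions of this sketch, since Article 27 prescribes the content of the assessment, not a schema.

```python
from dataclasses import dataclass, field

# Illustrative FRIA record mirroring the Article 27 elements listed above.

@dataclass
class FundamentalRightsImpactAssessment:
    intended_purpose: str
    usage_period_and_frequency: str
    affected_person_categories: list[str]  # e.g. ["retail applicants", "guarantors"]
    fundamental_rights_impacts: list[str]  # foreseeable impacts, incl. bias risk
    human_oversight_design: str            # who can override the output, and how
    mitigations: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        # The deployer must complete the assessment before first use.
        return all([self.intended_purpose, self.usage_period_and_frequency,
                    self.affected_person_categories,
                    self.fundamental_rights_impacts, self.human_oversight_design])
```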
DORA overlay
For financial entities in DORA scope, an AI-based credit or fraud system supplied by a third party is an ICT service. DORA's ICT third-party risk-management framework (Article 28) applies, with the contractual minima of Article 30. In practice:
- The DORA register of information on ICT third-party arrangements (Article 28(3)) must list the AI provider and its sub-providers (see the register-entry sketch after this list).
- The DORA exit strategy must address how the bank would replace the AI provider and migrate the model — including access to training data, model artifacts, and audit logs.
- If the AI provider becomes a critical ICT third-party provider (CTPP) under Article 31, the European Supervisory Authorities can directly oversee it.
- The provider's AI Act Article 13 instructions for use feed the DORA contractual due diligence.
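For the register-of-information bullet above, a sketch of what one entry might capture. The provider names are hypothetical, and the fields paraphrase the themes in this list rather than the ESAs' official register template.

```python
from dataclasses import dataclass, field

# Illustrative register entry; fields are assumptions, not the ESA template.

@dataclass
class IctArrangement:
    provider: str                     # the AI provider
    sub_providers: list[str]          # material subcontractors in the chain
    service: str                      # what the provider supplies
    supports_critical_function: bool  # drives proportionality under DORA
    exit_strategy: str                # how the bank would replace the provider
    portable_artifacts: list[str] = field(default_factory=list)

register = [
    IctArrangement(
        provider="ExampleScore GmbH",    # hypothetical name
        sub_providers=["CloudHost EU"],  # hypothetical name
        service="hosted credit-scoring model",
        supports_critical_function=True,
        exit_strategy="migrate to replacement model, with contractual data access",
        portable_artifacts=["training data", "model artifacts", "audit logs"],
    ),
]
```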
Enforcement landscape
National financial supervisors are increasingly the AI Act market surveillance authority for the sector: BaFin in Germany, ACPR/AMF in France, DNB in the Netherlands, for example. The European Banking Authority and EIOPA coordinate with the European AI Office. Existing GDPR enforcement against credit decisioning shows where the case-law is heading: in SCHUFA (CJEU C-634/21) the Court held that producing a credit score can itself be a solely automated decision under GDPR Article 22 where the lender draws strongly on it, so opaque scoring without meaningful human review is exposed.