§ AI Act · DORA · GDPR SECTOR

AI Act for fintech, credit scoring and insurance

Credit and insurance AI is high-risk by default. Article 27 fundamental rights impact assessments are mandatory for private deployers in this sector.

Summary

Two of the four Annex III §5 categories sit in financial services: creditworthiness (§5(b)) and life-and-health insurance pricing (§5(c)). Both are high-risk by default. Fraud detection is explicitly carved out of the credit category, but only the part that is fraud detection; a fraud model that also influences a credit decision is back in scope.

Financial services is also one of the few sectors where Article 27 fundamental rights impact assessments apply to private deployers, not only public bodies. A bank deploying an AI credit scorecard must complete the FRIA before first use and update it when the system changes materially.

The DORA overlay matters: when the AI is supplied by a third party (most credit-scoring SaaS), the bank's DORA Article 28 ICT-third-party-risk file must cover the AI vendor, and the AI Act provider's Article 13 documentation feeds directly into the DORA exit-strategy and concentration-risk analyses.

Who this applies to
Banks, lenders, fintech platforms, insurers, ICT third-party providers under DORA, credit scoring vendors, national financial supervisors.
Compliance deadline
2 August 2026 — high-risk AI system obligations apply. The Digital Omnibus (Council + Parliament agreed positions, March 2026) may shift this to 2 December 2027 for Annex III systems and 2 August 2028 for Annex I products. Until the amending regulation is published in the Official Journal, plan for 2 August 2026.
§ Key articles

What the law says

Annex III §5(b)
AI used to evaluate creditworthiness or establish credit scores — except detection of financial fraud.
Annex III §5(c)
AI used for risk assessment and pricing in life and health insurance.
Article 27
Fundamental rights impact assessment — explicitly required for private deployers offering credit and life/health insurance.
Article 14
Human oversight — applicants must be able to obtain a meaningful review of an adverse credit or insurance decision.
Article 13
Transparency to deployers — model behaviour, performance, and known limitations across population segments.
DORA Article 28
ICT third-party risk management — applies when an AI-based credit or fraud system is supplied by a third party.
GDPR Article 22
Right not to be subject to solely automated credit/insurance decisions with significant effect.
§ Detail

In depth

Where the line sits

Credit-scoring AI captured by Annex III §5(b) includes any system that evaluates a natural person's creditworthiness or establishes a credit score — for consumer lending, SME lending, BNPL, and most tenant-screening contexts. The §5(b) carve-out for financial-fraud detection is narrow: pure fraud-pattern matching is out, but a unified risk-and-fraud model that affects the credit outcome is in.

Insurance §5(c) captures life and health pricing and risk assessment. P&C lines (motor, home, travel) are not in §5(c), but motor telematics that influences pricing and indirectly assesses individual driver risk has triggered debate. The conservative reading of §5(c) covers most behavioural pricing models for life and health.
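The scoping logic above reduces to a short decision rule. A minimal sketch, assuming a hypothetical internal model registry (the `ModelProfile` fields are illustrative, not AI Act terminology):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelProfile:
    """Hypothetical registry entry for one deployed model."""
    evaluates_creditworthiness: bool   # feeds a credit score or lending decision
    fraud_detection_only: bool         # pure fraud-pattern matching, no credit influence
    life_or_health_pricing: bool       # risk assessment / pricing in life or health insurance

def annex_iii_scope(m: ModelProfile) -> Optional[str]:
    """Return the Annex III §5 hook that captures the model, or None.

    Mirrors the reading above: the §5(b) fraud carve-out applies only
    when the model does nothing but fraud detection; a unified
    risk-and-fraud model that affects the credit outcome stays in scope.
    """
    if m.evaluates_creditworthiness and not m.fraud_detection_only:
        return "Annex III 5(b)"
    if m.life_or_health_pricing:
        return "Annex III 5(c)"
    return None

# A fraud model that also influences the credit decision is back in scope:
unified = ModelProfile(evaluates_creditworthiness=True,
                       fraud_detection_only=False,
                       life_or_health_pricing=False)
print(annex_iii_scope(unified))  # Annex III 5(b)
```

The point of encoding the rule is auditability: the inventory exercise in the action items below becomes a query over the registry rather than a one-off spreadsheet.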

Provider obligations

The scoring-model provider carries the full high-risk stack of Articles 9 to 15: risk management, data and data-governance requirements, technical documentation, automatic logging, transparency and instructions for use to deployers, human-oversight design, and accuracy, robustness and cybersecurity testing.

Deployer obligations (the bank or insurer)

Under Article 26, the deploying bank or insurer must use the system in accordance with the provider's instructions, assign trained human oversight, ensure input data is relevant where it controls that data, monitor operation, retain logs, and complete the Article 27 FRIA before first use.

DORA overlay

For financial entities in DORA scope, an AI-based credit or fraud system supplied by a third party is an ICT service. DORA Article 28 ICT third-party risk management applies, with the contractual minima of Article 30. In practice that means audit and access rights in the contract, an exit strategy for services supporting critical or important functions, and an entry in the Article 28(3) register of information.
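The DORA/AI Act alignment can be tracked as a simple gap check per vendor. A minimal sketch, assuming a hypothetical register row (field names are illustrative, not the ESAs' reporting template):

```python
from dataclasses import dataclass

@dataclass
class IctProviderEntry:
    """Hypothetical register-of-information row for one AI vendor."""
    provider: str
    service: str                       # e.g. "credit-scoring SaaS"
    supports_critical_function: bool   # drives DORA criticality treatment
    ai_act_article_13_docs: bool       # provider instructions for use received
    exit_strategy_documented: bool
    audit_rights_in_contract: bool

    def gaps(self) -> list:
        """List the missing DORA/AI Act alignment items for this entry."""
        out = []
        if not self.ai_act_article_13_docs:
            out.append("obtain AI Act Article 13 documentation")
        if self.supports_critical_function and not self.exit_strategy_documented:
            out.append("document exit strategy")
        if not self.audit_rights_in_contract:
            out.append("add audit/access rights to contract")
        return out

entry = IctProviderEntry("AcmeScore", "credit-scoring SaaS",
                         supports_critical_function=True,
                         ai_act_article_13_docs=False,
                         exit_strategy_documented=False,
                         audit_rights_in_contract=True)
print(entry.gaps())
```

Running the same check across every register entry gives the single view that the master-agreement alignment in the action items presumes.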

Enforcement landscape

National financial supervisors are increasingly the AI Act market surveillance authority for the sector, for example BaFin in Germany, ACPR/AMF in France, and DNB in the Netherlands. The European Banking Authority and EIOPA coordinate with the European AI Office. Existing GDPR enforcement against credit decisioning (CJEU C-634/21, SCHUFA) shows where the case-law is heading: opaque scoring without meaningful human review is exposed.

§ Action items

Practical steps

01
Inventory every model that contributes to a credit or insurance pricing decision and confirm whether it is in Annex III §5(b)/(c) or qualifies for the fraud-only carve-out.
02
Run a fundamental rights impact assessment under Article 27 before the next material model release; document the bias-testing strategy and the human-oversight design.
03
Update the DORA ICT third-party register to include AI providers and align AI Act and DORA contractual minima in one master agreement.
04
Review GDPR Article 22 operating procedures: who reviews adverse decisions, on what timeframe, with what authority to overturn.
05
Brief credit officers and underwriters on Article 14 oversight — the human-in-the-loop is a control, not a formality.
§ What Fontvera found

Documents in our corpus

Digitaliseringsstyrelsen (DK), fetched 2026-04
§ Cross-references

Related Fontvera intelligence
