What "high-risk" means in recruitment
The AI Act sweeps almost every workplace AI use case into its high-risk category through Annex III §4, which captures:
- Recruitment and selection — including targeted job advertising, parsing applications, evaluating candidates.
- Decisions affecting employment relationships — promotion, termination, performance evaluation.
- Task allocation based on individual behaviour, traits, or characteristics.
- Monitoring and evaluation of workers.
This covers the obvious cases (ATS resume rankers, interview scoring) and several non-obvious ones: workforce-management software that assigns shifts based on predicted no-show risk; productivity dashboards that flag underperformers; sales-coaching AI that scores call recordings.
Vendor (provider) checklist
- Conformity assessment under Annex VI (internal) is the default route — no notified body required for §4 systems.
- Article 10 data governance is the highest-risk area: training data drawn from past hiring decisions inherits past biases. Document the testing strategy and the fairness metrics you measured against.
- Article 11 + Annex IV technical documentation must explain the model, the training data, the evaluation procedure, and the residual risks.
- Article 13 instructions for use must tell HR exactly which decisions the AI is fit to support and which it is not.
- Article 14 design for oversight: never a single "auto-reject" pipeline without a human in the loop.
- Article 49 EU Database registration before placing on the market.
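The Article 14 design point is worth making concrete. Below is a minimal routing sketch, not anything mandated by the Act: the score field, the threshold, and all names are illustrative assumptions. The key structural property is that there is no auto-reject branch at all; every candidate who does not advance lands in a human reviewer's queue.

```python
from dataclasses import dataclass
from enum import Enum

class Route(Enum):
    ADVANCE = "advance"            # candidate moves forward automatically
    HUMAN_REVIEW = "human_review"  # adverse outcomes always go to a person

@dataclass
class Screening:
    candidate_id: str
    score: float  # hypothetical model output in [0, 1]

def route(screening: Screening, advance_threshold: float = 0.75) -> Route:
    """Route a screened candidate.

    Deliberately has no reject branch: under an Article 14-style design,
    the system may advance candidates but never reject them on its own.
    The 0.75 threshold is an illustrative assumption.
    """
    if screening.score >= advance_threshold:
        return Route.ADVANCE
    return Route.HUMAN_REVIEW
```

The design choice to encode is absence: a reviewer auditing this pipeline can verify by inspection that no code path produces a rejection without human involvement.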
Employer (deployer) checklist
- Use the system within the provider's stated intended purpose (Article 26(1)). A CV screener trained on engineering roles must not be repointed at clinical hiring.
- Assign human oversight to a recruiter with the competence and authority to override AI outputs (Article 26(2)).
- Inform candidates that AI is used in the process (Article 26(11)) and provide GDPR Article 13/14 notice, including the existence of automated decision-making and meaningful information about the logic involved.
- Inform workers and their representatives before deployment (Article 26(7)). In Germany this means works council notification under §87 BetrVG; in France, CSE consultation; in the Netherlands, OR advice rights.
- Keep automatically generated logs for at least six months (Article 26(6)).
- If a candidate exercises their GDPR Article 22 rights, you must offer human review of any solely automated rejection.
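The Article 26(6) log obligation above can be operationalised as append-only, timestamped records plus a retention check. A minimal sketch, assuming JSON-lines storage and a 183-day window as a conservative reading of "at least six months"; the record fields and format are illustrative, not prescribed by the Act:

```python
import datetime as dt
import json

# "At least six months" (Article 26(6)); 183 days is an assumed floor.
RETENTION = dt.timedelta(days=183)

def log_event(log_file, event: dict) -> None:
    """Append one automatically generated event as a JSON line,
    stamped with a UTC timestamp so retention can be checked later."""
    record = {"ts": dt.datetime.now(dt.timezone.utc).isoformat(), **event}
    log_file.write(json.dumps(record) + "\n")

def purge_eligible(record: dict, now: dt.datetime) -> bool:
    """A record may be purged only after the retention window has passed."""
    ts = dt.datetime.fromisoformat(record["ts"])
    return now - ts > RETENTION
```

Note that six months is a floor, not a target: other Union or national law (for instance, discrimination-claim limitation periods) may require keeping these logs considerably longer.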
Where AI Act and GDPR Article 22 overlap
The AI Act regulates the system; GDPR Article 22 regulates the decision. A hiring AI can be Article 14-compliant (human oversight is designed in) and still violate Article 22 if, in operation, no meaningful human actually reviews rejections. The CNIL's 2023 guidance on automated decisions in hiring sets the practical bar: the human reviewer must have the competence and authority to overturn the AI, must actually consider the candidate's individual situation, and the review must not be a rubber stamp.
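The "rubber stamp" bar can even be monitored empirically: if a reviewer never overrides the AI across a large sample of decisions, the review is probably not meaningful intervention in the Article 22 sense. A heuristic sketch with hypothetical names and an assumed sample-size threshold; this is a monitoring signal, not a legal test:

```python
from dataclasses import dataclass

@dataclass
class Review:
    ai_recommendation: str  # e.g. "reject"
    human_decision: str     # what the reviewer actually decided

def looks_like_rubber_stamp(reviews: list[Review], min_sample: int = 50) -> bool:
    """Flag a reviewer who never deviates from the AI over a large sample.

    A zero override rate does not prove the review is pro forma, but it is
    a strong signal worth auditing. min_sample=50 is an assumption.
    """
    if len(reviews) < min_sample:
        return False  # too few decisions to judge
    overrides = sum(r.human_decision != r.ai_recommendation for r in reviews)
    return overrides == 0
```

A compliance team could run this over the Article 26(6) logs periodically and treat a flag as a trigger for retraining or reassigning the reviewer, which also evidences that oversight is actively managed.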
Works council and labour law
Across major Member States, deploying AI for hiring or workforce management triggers existing co-determination obligations:
- Germany: Works council co-determination under §87 BetrVG for any technical system that monitors employees. Confirmed by the BAG (Federal Labour Court) for performance-monitoring software.
- France: CSE must be informed and consulted before deployment under L.2312-8 of the Code du travail.
- Netherlands: Works council advice right under WOR Article 25.
- Denmark/Sweden/Finland: Information and consultation rights under the Nordic co-determination acts.