The Article 5 prohibitions
Article 5 lists eight categories of prohibited AI practices, in force since 2 February 2025. Five of them apply directly to law-enforcement contexts:
- Article 5(1)(d). AI for risk assessment of natural persons to predict the likelihood of committing a criminal offence, based solely on profiling or assessing personality traits and characteristics. Predictive-policing tools that score individuals based purely on demographic profiling are prohibited.
- Article 5(1)(e). Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases.
- Article 5(1)(f). Emotion recognition in the workplace and educational institutions — except for medical or safety reasons.
- Article 5(1)(g). Biometric categorisation systems that categorise natural persons based on biometric data to deduce race, political opinions, trade-union membership, religious or philosophical beliefs, sex life, or sexual orientation.
- Article 5(1)(h). Real-time remote biometric identification in publicly accessible spaces for law-enforcement purposes — except under the narrow Article 5(2)–(7) derogations.
The real-time RBI derogations (Articles 5(2)–(7))
Real-time RBI is permitted only when strictly necessary for one of three purposes:
- Targeted search for specific victims of abduction, trafficking, or sexual exploitation, or search for missing persons.
- Prevention of a specific, substantial, and imminent threat to the life or physical safety of natural persons, or of a genuine and present or genuine and foreseeable threat of a terrorist attack.
- Localisation or identification of a person suspected of a criminal offence listed in Annex II and punishable by a custodial sentence or detention order with a maximum of at least four years.
Each deployment requires prior authorisation by a judicial or independent administrative authority (with a limited urgency exception allowing ex-post authorisation), notification of the market surveillance authority and the national data protection authority, registration in the non-public section of the EU database, and a completed Article 27 fundamental rights impact assessment.
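The gating logic above can be sketched as a small decision helper. This is a hypothetical illustration only: the type name, field names, and purpose labels are assumptions made for the sketch, not terms from the Act.

```python
from dataclasses import dataclass

# Illustrative labels for the three permitted purposes (Article 5(1)(h)(i)-(iii)).
PERMITTED_PURPOSES = {
    "victim_search",    # victims of abduction, trafficking, sexual
                        # exploitation, or missing persons
    "imminent_threat",  # threat to life/physical safety, or terrorist attack
    "annex_ii_suspect", # Annex II offence, custodial sentence max >= 4 years
}

@dataclass
class RBIDeploymentRequest:
    purpose: str
    prior_authorisation: bool        # judicial or independent administrative authority
    authorities_notified: bool       # market surveillance authority + national DPA
    registered_in_eu_database: bool  # non-public section of the EU database
    fria_completed: bool             # Article 27 fundamental rights impact assessment

def real_time_rbi_permitted(req: RBIDeploymentRequest) -> bool:
    """True only if the purpose is one of the three permitted ones AND
    every procedural safeguard has been satisfied — the conditions are
    cumulative, not alternative."""
    return (
        req.purpose in PERMITTED_PURPOSES
        and req.prior_authorisation
        and req.authorities_notified
        and req.registered_in_eu_database
        and req.fria_completed
    )
```

The point of the sketch is the conjunction: a permitted purpose alone is never sufficient; each procedural safeguard must also be in place.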
Annex III high-risk law-enforcement AI
- §1 Biometrics. Post-remote biometric identification (run after the fact on recorded material), biometric categorisation by sensitive or protected characteristics not caught by the Article 5(1)(g) prohibition, and emotion recognition outside the prohibited workplace and education contexts.
- §6 Law enforcement. AI assessing the risk of a natural person becoming a victim of crime, polygraph-style tools, and — under Annex III §6(c)–(e) — AI for evaluating the reliability of evidence, AI assessing the risk of offending or re-offending (beyond the Article 5(1)(d) prohibition), and profiling AI in the detection, investigation, or prosecution of criminal offences.
- §7 Migration, asylum, border control. Polygraph-style AI used in this context, AI assessing migration risk or examining asylum applications, AI for verifying authenticity of travel documents.
Why §1 may need a notified body
Article 43(1) treats the biometric systems of Annex III §1 differently from every other Annex III category, for which internal control under Annex VI is the standard route — the legislators considered the impact on fundamental rights too high to leave entirely to self-assessment. Where the provider has applied the relevant harmonised standards (or, where applicable, common specifications) in full, it may choose between:
- Annex VI conformity assessment based on internal control; or
- Annex VII conformity assessment based on assessment of the quality management system and of the technical documentation, involving a notified body.
Where such standards do not exist or have not been applied in full, the Annex VII procedure — and with it the notified body — is mandatory. Where the system is intended to be put into service by law-enforcement, immigration, or asylum authorities, the market surveillance authority acts as the notified body.
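The Article 43(1) route selection can be summarised as a small function. A hypothetical sketch — the function name and return strings are illustrative assumptions, not the Act's wording.

```python
def conformity_routes(annex_iii_point: int,
                      harmonised_standards_applied_in_full: bool) -> list[str]:
    """Return the conformity-assessment routes open to the provider.

    Sketch of the Article 43(1) logic: Annex III categories other than
    point 1 use internal control (Annex VI); for point 1 (biometrics),
    internal control is available only where harmonised standards or
    common specifications have been applied in full, otherwise the
    notified-body procedure of Annex VII is mandatory.
    """
    if annex_iii_point != 1:
        return ["Annex VI (internal control)"]
    if harmonised_standards_applied_in_full:
        return ["Annex VI (internal control)",
                "Annex VII (notified body: QMS + technical documentation)"]
    return ["Annex VII (notified body: QMS + technical documentation)"]
```

The asymmetry is the point: only for §1 does the availability of internal control depend on full application of standards.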
Deployer obligations specific to law enforcement
- Article 26 deployer baseline applies in full.
- Article 27 fundamental rights impact assessment — compulsory before first use; law-enforcement authorities are bodies governed by public law, so the Article 27(1) scope condition is always met.
- Article 26(10): post-remote biometric identification in the course of a criminal investigation requires authorisation by a judicial or administrative authority, requested in advance or, at the latest, within 48 hours.
- Article 86: individual decisions taken on the basis of high-risk AI output that adversely affect natural persons must on request be explained to the person.
- Articles 74–82 market surveillance powers — under Article 74(8), Member States designate as market surveillance authority for law-enforcement uses either the competent data protection supervisory authority or another authority subject to the same independence conditions as under Directive (EU) 2016/680.
- Law Enforcement Directive (Directive (EU) 2016/680) governs the personal-data processing in parallel.
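The deployer obligations above lend themselves to a simple compliance checklist. A hypothetical sketch — the item keys and descriptions are illustrative labels, not the Act's terminology.

```python
# Illustrative checklist for a law-enforcement deployer of an Annex III
# high-risk AI system; keys map to the provisions discussed above.
DEPLOYER_CHECKLIST = {
    "art_26_baseline":    "operate per instructions, human oversight, monitoring, logs",
    "art_27_fria":        "fundamental rights impact assessment before first use",
    "art_26_10_post_rbi": "judicial/administrative authorisation for post-RBI in investigations",
    "art_86_explanation": "explain adverse individual decisions on request",
    "led_2016_680":       "lawful personal-data processing under Directive (EU) 2016/680",
}

def outstanding(done: set[str]) -> list[str]:
    """Return the checklist items not yet satisfied, in dictionary order."""
    return [item for item in DEPLOYER_CHECKLIST if item not in done]
```

Because the obligations are cumulative, a deployment is compliant on this sketch only when `outstanding(...)` is empty.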