High-Risk AI Definition: Frequently Asked Questions

Five questions answered with specific EU AI Act article references, ahead of the August 2, 2026 enforcement deadline.

What makes an AI system high-risk?

Two paths: (1) the AI system is a safety component of a product (or is itself a product) covered by the EU harmonisation legislation listed in Annex I and is required to undergo third-party conformity assessment under that legislation (Article 6(1)), or (2) the AI system falls into one of the use cases listed in Annex III (Article 6(2)).

What are the Annex III categories?

Eight categories: biometrics, critical infrastructure, education, employment, access to essential services, law enforcement, migration/border control, and justice/democratic processes.
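
Putting the two Article 6 paths together with these eight categories, the classification logic can be sketched as a short decision function. This is a minimal illustration in Python, not anything defined by the Act: the enum, function, and parameter names are our own, and the Article 6(3) derogation for Annex III systems that pose no significant risk is noted in a comment but not modelled.

from enum import Enum, auto

class AnnexIIICategory(Enum):
    # The eight Annex III use-case categories, abbreviated as in this FAQ.
    BIOMETRICS = auto()
    CRITICAL_INFRASTRUCTURE = auto()
    EDUCATION = auto()
    EMPLOYMENT = auto()
    ESSENTIAL_SERVICES = auto()
    LAW_ENFORCEMENT = auto()
    MIGRATION_BORDER_CONTROL = auto()
    JUSTICE_DEMOCRATIC_PROCESSES = auto()

def is_high_risk(
    annex_i_safety_component: bool,         # Art. 6(1)(a): safety component of (or itself) a product covered by Annex I legislation
    third_party_assessment_required: bool,  # Art. 6(1)(b): third-party conformity assessment required
    annex_iii_category: AnnexIIICategory | None,  # Art. 6(2): matching Annex III use case, if any
) -> bool:
    # Path 1 (Article 6(1)): both conditions must hold together.
    if annex_i_safety_component and third_party_assessment_required:
        return True
    # Path 2 (Article 6(2)): an Annex III use case makes the system high-risk
    # by default. Not modelled: the Article 6(3) derogation for Annex III
    # systems posing no significant risk, which must be documented (Article 6(4)).
    return annex_iii_category is not None

# Example: a CV-screening tool falls under the employment category, so it is
# high-risk via the Annex III path even though it is not an Annex I product.
assert is_high_risk(False, False, AnnexIIICategory.EMPLOYMENT)
assert not is_high_risk(True, False, None)  # Annex I product, but self-certified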

Does every AI in healthcare count as high-risk?

Not automatically. AI systems that are medical devices (or their safety components) requiring third-party conformity assessment under the MDR/IVDR are high-risk via Annex I. General health informatics tools that play no role in clinical decision-making may fall into a lower risk tier.

Is my chatbot high-risk?

Generally no. Chatbots typically fall under limited risk (Article 50 transparency obligations). However, if a chatbot makes or influences decisions about credit, insurance, employment, or healthcare, it may be high-risk.

Who decides the classification?

The provider self-classifies against the Article 6 criteria; a provider that considers an Annex III system not to be high-risk must document that assessment before placing the system on the market (Article 6(3) and (4)). Market surveillance authorities can challenge the classification during inspection. For borderline cases, classification guidelines issued via the European AI Office under Article 6(5) are the reference point.
