Two paths: (1) The AI is a safety component of a product covered by EU harmonisation legislation listed in Annex I (Article 6(1)), or (2) the AI falls into one of the use categories listed in Annex III (Article 6(2)).
Five questions answered with specific EU AI Act article references, ahead of the August 2, 2026 enforcement deadline.
Eight categories: biometrics, critical infrastructure, education, employment, access to essential services, law enforcement, migration/border control, and justice/democratic processes.
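The two classification paths and the eight Annex III categories above can be sketched as a simple decision function. This is an illustrative simplification, not legal logic: the function names and category labels are my own, and real classification involves further conditions (for example, the Article 6(3) derogation for systems that do not pose a significant risk) that this sketch omits.

```python
from enum import Enum
from typing import Optional

class AnnexIIICategory(Enum):
    """The eight Annex III use categories (illustrative labels)."""
    BIOMETRICS = "biometrics"
    CRITICAL_INFRASTRUCTURE = "critical infrastructure"
    EDUCATION = "education"
    EMPLOYMENT = "employment"
    ESSENTIAL_SERVICES = "access to essential services"
    LAW_ENFORCEMENT = "law enforcement"
    MIGRATION_BORDER = "migration/border control"
    JUSTICE_DEMOCRACY = "justice/democratic processes"

def is_high_risk(is_annex_i_safety_component: bool,
                 annex_iii_category: Optional[AnnexIIICategory]) -> bool:
    """Sketch of the two Article 6 paths to high-risk status.

    Path 1 (Article 6(1)): the AI is a safety component of a product
    covered by Annex I harmonisation legislation.
    Path 2 (Article 6(2)): the AI falls into an Annex III use category.
    """
    return is_annex_i_safety_component or annex_iii_category is not None

# Example: a CV-screening tool used in hiring falls under the
# employment category, so it lands on the Article 6(2) path.
print(is_high_risk(False, AnnexIIICategory.EMPLOYMENT))  # True
```

Either path alone is sufficient, which is why the function is a logical OR: a medical-device AI hits Path 1 even if it matches no Annex III category, and vice versa.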
Not automatically. AI systems that are medical devices under the MDR/IVDR are high-risk via Annex I. General health informatics tools without clinical decision-making may be lower risk.
Generally no. Chatbots typically fall under limited risk (Article 50 transparency obligations). However, if a chatbot makes or influences decisions about credit, insurance, employment, or healthcare, it may be high-risk.
The provider self-classifies based on the Article 6 criteria. Market surveillance authorities can challenge that classification during inspection. For borderline cases, Commission guidance on the practical implementation of Article 6, prepared with the support of the European AI Office, applies.