AI Transparency: Frequently Asked Questions

Five questions answered with specific EU AI Act article references, ahead of the August 2, 2026 application deadline.


What are transparency obligations?

Under Article 50, providers must ensure that people are informed when they interact with an AI system (chatbots, voice assistants) and that synthetic outputs are marked as AI-generated, while deployers must disclose the use of emotion recognition or biometric categorisation systems and label deepfakes.

Do all AI systems need transparency?

Not all. Prohibited practices are banned outright under Article 5, and high-risk systems carry their own transparency and information requirements under Article 13. Limited-risk systems have the disclosure obligations of Article 50. Minimal-risk systems have no mandatory transparency.

What must chatbot providers disclose?

That the user is interacting with an AI system, unless this is obvious to a reasonably well-informed, observant and circumspect person (Article 50(1)). The disclosure must be clear and distinguishable, and made at the latest at the time of the first interaction (Article 50(5)).

What about AI-generated content?

Providers of AI systems that generate synthetic audio, image, video, or text must ensure outputs are marked in a machine-readable format and detectable as artificially generated or manipulated (Article 50(2)). Deployers must disclose that deepfakes are artificially generated or manipulated (Article 50(4)).
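The Act does not prescribe a specific marking format. As a purely illustrative sketch of what a machine-readable marker for text output could look like (the field names here are hypothetical, not taken from any standard or from the Act itself):

```python
import json

def wrap_with_disclosure(content: str, generator: str) -> str:
    """Wrap AI-generated text in a JSON envelope carrying a
    machine-readable provenance flag. Illustrative only: the
    AI Act mandates machine-readable marking but no particular
    format, and these field names are invented for this example."""
    return json.dumps({
        "ai_generated": True,   # machine-readable flag
        "generator": generator, # hypothetical provenance field
        "content": content,
    })

def is_ai_generated(payload: str) -> bool:
    """Check a payload for the machine-readable flag above."""
    try:
        return bool(json.loads(payload).get("ai_generated"))
    except (json.JSONDecodeError, AttributeError):
        return False
```

In practice, image, audio, and video provenance is more often handled through embedded metadata or watermarking schemes such as C2PA Content Credentials rather than a JSON envelope.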

Are there exceptions to transparency?

Yes. AI systems authorised by law to detect, prevent, or investigate criminal offences are exempt from several Article 50 disclosures, and the Act does not apply to national-security uses at all (Article 2(3)). For deepfakes that form part of an evidently artistic, creative, satirical, or fictional work, disclosure is limited to a form that does not hamper the display or enjoyment of the work (Article 50(4)).
