What happened on 28 April
The Digital Omnibus was the Commission and Council's vehicle for staggering AI Act high-risk enforcement into a staircase: Annex III systems (employment, education, biometrics, critical infrastructure, essential services) would slip to December 2027; Annex I systems (regulated products — medical devices, machinery, automotive, toys) would slip to August 2028.
Trilogue broke down on the Annex I architecture. Member States wanted the staircase tied to existing sectoral conformity assessment cycles; Parliament negotiators argued that would re-open notified-body designations across MDR, IVDR, and the Machinery Regulation. With no compromise text, the file was kicked to the next round on 13 May 2026.
The legal effect of the failure is simple: the original 2 August 2026 date in Article 113(b) and the Article 99 penalty regime continue to apply. No grace period exists in the published text.
Why this matters — by the numbers Fontvera tracks
The Omnibus delay was widely treated as inevitable. It wasn't. Here is the obligation surface that just stayed in force, measured against Fontvera's structured corpus:
- 743 AI Act obligations mapped across 42 sectors.
- 3,247 obligations tracked across the nine EU horizontal regulations the AI Act collides with.
- 41 cross-regulatory references involve the AI Act: 22 overlaps, 15 gaps, 4 hard conflicts.
- AI Act ↔ GDPR has the densest interaction: 4 overlaps, 2 conflicts, 2 gaps (8 collisions in total).
- AI Act ↔ NIS2: 5 collisions. AI Act ↔ DORA: 5. AI Act ↔ DMA: 6. AI Act ↔ DSA: 5. AI Act ↔ ePrivacy: 4.
- Underlying corpus: 312,758 current regulatory documents from 130 sources across 96 jurisdictions; 33,602 documents have full structured extraction.
None of that surface area shrinks because the trilogue stalled. It just becomes enforceable on 2 August.
The four hard AI Act conflicts that did not get resolved
These are the conflicts a delay would have given Member States and the AI Office time to fix. They now arrive on 2 August in their current form. Each was extracted from primary text by Fontvera and is in the corpus today:
| AI Act | Other regulation | Severity | What collides |
|---|---|---|---|
| Art 10 | GDPR Art 17 | High | AI Act allows processing of special category data for bias detection where strictly necessary; GDPR right to erasure can require its deletion once retention is no longer justified. |
| Art 19 | ePrivacy Art 6 | High | AI Act requires providers to retain automatically generated logs for at least six months; ePrivacy requires traffic data to be erased or anonymised once no longer needed for transmission. |
| Art 18 | DMA Art 5 | Medium | AI Act mandates 10-year retention of technical documentation and logs; DMA forces gatekeepers to give real-time data and algorithm access on request. |
| Art 19 | GDPR Art 5 | Medium | AI Act six-month log retention vs GDPR storage limitation principle, which requires that data be kept no longer than necessary. |
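A register like the table above is naturally modeled as structured records that can be filtered by type and severity. A minimal sketch in Python — the field names and `Collision` type are illustrative, not Fontvera's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Collision:
    """One cross-regulatory reference. Field names are illustrative only."""
    ai_act_article: str    # e.g. "Art 19"
    other_instrument: str  # e.g. "ePrivacy Art 6"
    kind: str              # "overlap" | "gap" | "conflict"
    severity: str          # "High" | "Medium"
    summary: str

register = [
    Collision("Art 10", "GDPR Art 17", "conflict", "High",
              "Bias-detection processing vs right to erasure"),
    Collision("Art 19", "ePrivacy Art 6", "conflict", "High",
              "Six-month log retention vs erasure of traffic data"),
    Collision("Art 18", "DMA Art 5", "conflict", "Medium",
              "10-year documentation retention vs real-time access"),
]

# Pull the high-severity conflicts for priority legal review.
high_conflicts = [c for c in register
                  if c.kind == "conflict" and c.severity == "High"]
```

The same shape supports the counts quoted earlier (overlaps vs gaps vs conflicts per regulation pair) with a one-line `Counter` over `kind`.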
What's at stake — the five highest-penalty obligations
From Fontvera's 743 mapped AI Act obligations, sorted by Article 99 penalty tier:
- Article 5 — Prohibited AI practices. Up to €35,000,000 or 7% of worldwide annual turnover, whichever is higher. Covers eight prohibited categories including subliminal manipulation, exploitation of vulnerabilities, social scoring, predictive policing, untargeted facial-image scraping, emotion inference at workplace and school, biometric categorisation by sensitive attributes, and real-time remote biometric identification in public spaces by law enforcement.
- Article 16 — Provider obligations for high-risk AI. Up to €15,000,000 or 3%. Covers conformity assessment, registration in the EU database, post-market monitoring, technical documentation, transparency to deployers and the Article 9 risk management system.
- Article 26 — Deployer obligations for high-risk AI. Up to €15,000,000 or 3%. Use the system per provider instructions, ensure human oversight, monitor operation, retain logs, and conduct fundamental rights impact assessments where Article 27 applies.
- Article 50 — Transparency obligations. Up to €15,000,000 or 3%. Disclose AI interaction to natural persons; label generative AI output as artificially generated; mark deepfakes; inform users of emotion recognition and biometric categorisation.
- Article 9 — Risk management system. Enforced through Article 16 at €15,000,000 or 3%. A continuous, iterative process across the entire lifecycle: hazard identification, residual-risk evaluation, mitigation testing, and updates from post-market monitoring.
Authorised representatives, importers, distributors and notified bodies sit on the same €15M / 3% tier under Articles 22, 23, 24 and 31/33 respectively. Supplying incorrect, incomplete or misleading information to authorities sits one rung lower at €7,500,000 or 1%.
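The "whichever is higher" rule behind each tier is simple arithmetic, but it is worth seeing how quickly the turnover arm overtakes the fixed amount. A minimal sketch — the function name is ours, and the SME carve-out in Article 99(6), which takes the lower of the two figures, is deliberately not modeled:

```python
def article_99_cap(fixed_eur: int, pct: int, worldwide_turnover_eur: int) -> float:
    """Upper bound of an Article 99 fine tier: the higher of the fixed
    amount and pct% of worldwide annual turnover.
    (Article 99(6) lets SMEs take the LOWER of the two; not modeled here.)"""
    return max(fixed_eur, worldwide_turnover_eur * pct / 100)

# Article 5 tier (EUR 35M or 7%), firm with EUR 1bn worldwide turnover:
# 7% of EUR 1bn = EUR 70M, so the turnover arm dominates the EUR 35M floor.
article_99_cap(35_000_000, 7, 1_000_000_000)

# Article 16 tier (EUR 15M or 3%), firm with EUR 100M turnover:
# 3% is only EUR 3M, so the fixed EUR 15M amount dominates.
article_99_cap(15_000_000, 3, 100_000_000)
```

The crossover point for the Article 5 tier sits at EUR 500M turnover (35M / 0.07); above it, exposure scales linearly with turnover rather than staying at the fixed figure.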
What companies should actually do in the next 94 days
The work doesn't change because the delay failed. It just stops being optional. In order:
- Classify your systems against Article 5 and Annex III. Run Fontvera's free AI Act high-risk diagnostic to get a defensible classification with the specific articles that apply.
- Lock the four hard conflicts above into legal review now. They will not be resolved by 2 August. Decisions on log retention, special-category processing and deepfake disclosure should be made and documented before, not during, an enforcement inquiry.
- Pull your obligation list against the 743 we have mapped. Fontvera links every obligation to its source article, the obligated entity and the penalty tier. Search the corpus for your sector and entity role from the homepage.
- If you are a deployer relying on a non-EU provider: Article 22 means you may inherit authorised representative obligations. This is one of the four "high severity" gaps in the cross-regulatory register and is not waiting for the trilogue.
- Treat the 13 May trilogue as informational, not strategic. Even a successful second attempt has to clear plenary and Council before any deadline shifts. Plan against the unchanged 2 August date.
Why this page is harder to copy than it looks
The narrative of "Omnibus failed, deadline holds" is on every newsletter. The numbers above are not. They come from 312,758 current documents, 33,602 with full structured extraction, 743 AI Act obligations, 219 cross-regulatory references and 41 AI-Act-specific collisions sitting in Fontvera's production database. We track them because the corpus is built to answer cross-border regulatory questions in seconds, and we expose them here because the Omnibus failure is exactly the moment the surface becomes operational risk.
If your team is mapping AI Act exposure for 2 August, the obligation register, conflict descriptions and penalty tiers above are the inputs you need — and they are what the homepage search returns.
Run your free AI Act compliance diagnostic
Five minutes. No login. Returns your classification (Prohibited, High-Risk Annex III, High-Risk Annex I, Limited-Risk or Minimal-Risk) and the specific articles that apply.
Search 316,000+ regulatory documents
The corpus that produced these numbers is searchable from the homepage. Cross-border, cross-regulation, cross-jurisdiction.