AgoraNova


The Future of AI Governance: A Measured Reaction from EU Regulators to the AI Omnibus Proposal

AlAnany, Batoul & Namek, Farid

Published on 02/03/2026

In Brief


  • The European Commission proposed amending the AI Regulation (the AI Act) even before it became fully applicable.

  • The AI Omnibus, proposed by the European Commission, aims to simplify implementation of the EU AI Act but affects core elements of EU digital governance, particularly the intersection between AI regulation and data protection law.

  • In a joint opinion, the European Data Protection Board and the European Data Protection Supervisor support clarification but warn against weakening fundamental rights safeguards or provider accountability.

  • Key legal debates concern high-risk AI registration, documentation obligations, processing of sensitive data, and supervisory coordination.

  • The authorities oppose broad exemptions for high-risk systems and caution against expanding sensitive data processing without strict conditions.

  • The proposal is also politically sensitive due to discussions about postponing certain AI Act obligations beyond 2026.

  • The broader context is geopolitical: the EU’s rights-based regulatory model contrasts with the innovation-driven approach of the United States and the state-centric strategy of China.


Key Takeaways for Investors


  • No major deregulation expected. Core compliance duties for high-risk AI systems are likely to remain intact.

  • High-risk classification drives higher cost exposure: registration, transparency, and documentation obligations will materially affect AI firms operating in the EU.

  • Sensitive data remains a legal hotspot, and enforcement risk under the GDPR continues to be significant.

  • Potential delays create short-term uncertainty but do not signal weaker long-term regulation.

  • Strategic divergence persists as the EU offers regulatory predictability, the US prioritises speed, and China prioritises scale.


The so-called AI Omnibus, introduced by the European Commission as part of the broader Digital Omnibus initiative, seeks to streamline the implementation of the EU AI Act. Presented as a technical simplification instrument, the proposal nonetheless touches upon core structural elements of European digital regulation, with an emphasis on personal data protection in the context of AI development.

In their joint opinion, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) welcomed clarifications but expressed reservations, stressing that simplification must not weaken EU fundamental rights protections.

A structural issue is at stake: AI systems are increasingly dependent on large-scale data processing, making the boundary between AI development and privacy law legally and operationally porous.

1. Legal: Simplification vs. Regulatory Integrity

The Omnibus responds to the complexity of the AI Act, particularly regarding:

  • multi-tier risk classification,

  • compliance layering,

  • supervisory fragmentation,

  • and technical documentation obligations.

While the simplification effort has been broadly supported, the EDPB and EDPS warn of two systemic risks: the erosion of fundamental rights and the dilution of provider accountability. Any effort to simplify the regulatory framework must remain firmly anchored in the protection of fundamental rights. In particular, safeguards relating to privacy and data protection, non-discrimination, transparency, and meaningful human oversight in automated decision-making must be preserved. At the same time, the authorities stress that reducing compliance burdens must not erode provider accountability: core obligations remain essential to effective oversight and enforcement. The underlying challenge is not merely technical but structural: finding a balance between regulatory usability and the preservation of a coherent, rights-based framework.

1.1. Fostering innovation

The joint opinion welcomes the establishment of regulatory sandboxes intended to allow experimentation with AI systems in a secure framework, while recommending the direct involvement of data protection authorities in supervising the processing of personal data within these environments. It specifies that the determination of the competent authority for European-level sandboxes must be clarified within the AI Act itself, and that the EDPS should be competent for sandboxes dedicated to EU entities (institutions, agencies, etc.). The opinion also welcomes the introduction of simplified procedures to encourage innovation.

More controversial is the easing of technical documentation obligations for so-called “mid-cap” companies (distinct from small and medium-sized enterprises), mainly innovative firms particularly exposed to international competition. The EDPB and the EDPS rightly point out that the risks posed by high-risk systems do not depend on the size of the company that places them on the market.

1.2. Obligation to register high-risk AI

The opinion unequivocally condemns the Commission’s proposal in this respect. The obligation to register must be maintained for all systems listed as high-risk, even where providers consider that their systems do not in fact present a high risk. This obligation ensures a high level of transparency, prevents providers from self-exempting, and avoids diluting their responsibility. According to the opinion, abolishing this obligation would undermine the accountability principle and reduce market transparency, creating a risk of undue exemptions.

1.3. Processing of sensitive data

The AI Act currently allows certain special categories of sensitive personal data (such as health data or ethnic origin) to be processed only in high-risk systems, and only for the purpose of detecting and correcting bias in AI models. On the proposal to extend this possibility to all AI systems, the EDPB and the EDPS adopt a cautious stance. Their recommendation remains somewhat vague: they suggest specifying the relevant situations and allowing such processing only where the risk of harmful bias-related effects is sufficiently serious. In any event, they recall the competence of data protection authorities to enforce the strict GDPR rules governing sensitive data.

1.4. Governance and inter-regulation

Large parts of the opinion concern the respective roles of supervisory authorities. It stresses the need to clearly delineate the competences of the AI Office as well as market surveillance authorities in order to avoid any overlap with the independent supervision exercised by the EDPS over AI systems used by EU institutions, bodies, and agencies. It also recommends granting the EDPB a consultative role within the European Artificial Intelligence Board, particularly in view of the role it intends to play in ensuring the consistency of personal data practices within regulatory sandboxes.

1.5. AI Literacy and Institutional Trust: a social aspect of the initiative

The AI debate increasingly recognises that legal compliance alone cannot guarantee the safe deployment of artificial intelligence. Beyond formal rules, societal readiness depends on a broader ecosystem shaped also by public understanding of AI and institutional transparency. In this respect, the governance of AI extends beyond law into the domains of public policy, education, and democratic legitimacy.

Societal adaptation also requires structural adjustments, particularly in relation to workforce transformation and digital culture. As AI systems increasingly permeate economic and administrative processes, individuals and organisations must develop the skills necessary to understand, use, and critically assess these technologies. Recognising this, the framework promoted by the European Commission incorporates provisions aimed at fostering AI literacy, encouraging both public authorities and private actors to support education, awareness, and responsible use. Such measures reflect a broader approach in which regulation is complemented by societal capacity-building.

2. Geopolitical and geoeconomic: a competition between three models

AI governance has quickly become one of the most contested arenas of geopolitical rivalry. As states compete to lead in artificial intelligence, three distinct regulatory philosophies have emerged, each reflecting different assumptions about the role of the state, the market, and individual rights.

2.1. The United States

The United States follows a decentralised and innovation-driven path. Comprehensive federal AI legislation remains limited, with oversight largely sectoral. Governance has evolved primarily through executive action – notably the 2023 Executive Order on Safe, Secure, and Trustworthy AI – and soft-law instruments such as the NIST AI Risk Management Framework.

This model prioritises speed, private sector leadership, and capital mobilisation. The results are visible in rapid innovation cycles and global technological dominance. Yet systemic safeguards remain uneven, and long-term accountability seems fragile. The advantage is agility; the risk is fragmentation.

2.2. China

China represents a state-centric strategic model. AI development is tightly integrated into industrial policy and national security planning, enabling coordinated resource allocation and rapid deployment. Regulation exists – including targeted measures on algorithms and generative AI – but operates within a broader state-led expansion strategy.

This model excels in speed and scale, supported by extensive data access and central coordination. However, limited individual rights protections and governance opacity generate international trust concerns, complicating the global diffusion of Chinese standards.

2.3. The EU

The European Union has embraced what can be described as a regulatory trust model. Centred around the AI Act and the broader EU digital framework, this approach rests on three pillars: protection of fundamental rights, risk-based oversight, and market harmonisation. AI systems are classified according to risk, with stricter obligations imposed on high-risk applications, embedding technological deployment within a constitutional and rights-based structure.

Beyond compliance, the EU seems to seek transforming regulation into a competitive advantage. By promoting transparency, accountability, and human oversight, it aims to position “trustworthy AI” as a market differentiator. Harmonised standards across the single market are intended to provide scale while preserving normative coherence. The central wager is that legal certainty and rights guarantees will enhance, rather than hinder, Europe’s global competitiveness.

Whether this model becomes a global reference point or constrains competitiveness remains uncertain. If trust and regulatory coherence prove economically valuable, Europe may shape global standards. If not, technological leadership – and norm-setting power – may consolidate elsewhere.

3. Strong reservations about postponing the entry into application of the AI Act

The AI Regulation was due to apply from 2 August 2026, with certain provisions (general rules and prohibitions of certain practices) subject to early application. Observing that effective implementation requires technical standards still under development and fearing the emergence of systems whose compliance might later be called into question, the Commission proposed a mechanism linking the Regulation’s application to the adoption of standards and guidelines. Two deadlines were envisaged: 2 December 2027 for high-risk AI systems classified by reference to their field of use, and 2 August 2028 for high-risk AI systems relating to products governed by EU harmonised legislation.

This postponement is viewed critically. The EDPB and the EDPS consider that certain obligations, particularly those relating to transparency, must apply as originally planned. They also call for any delays to be kept to an absolute minimum. They recall that existing AI systems fall outside the scope of the AI Act, meaning that the later its entry into application, the greater the number of systems that will have developed without being subject to EU law requirements.


The article can be downloaded with all references below.

The Future of AI Governance - A Measured Reaction from EU Regulators.pdf