Regulatory Realism in Brussels: How the EU's Digital Omnibus Rewrites the AI Act & GDPR


Authors
Neville Samuell, CTO, Ethyca
Topic
Privacy Practice
Published
Nov 25, 2025
Introduction

On November 19th, the European Commission dropped what might be the most significant digital policy shift since GDPR came into force in 2018. The "Digital Omnibus" is a massive legislative package that rewrites the EU's digital rulebook. And it's got everyone from privacy advocates to Big Tech lobbyists spinning.

Context setting

What is the Omnibus, and why now?

The Digital Omnibus is the Commission's answer to an uncomfortable reality: Europe has spent the better part of a decade becoming the world's de facto digital regulator (the "Brussels Effect" in action), but at the same time it's struggling to build a conducive environment for competitive tech companies. The Draghi Report on European Competitiveness painted a grim picture: Europe is falling behind the US and China, suffocated by a fragmented single market and regulatory complexity.

So the Commission is threading a very narrow needle: maintain Europe's high standards for fundamental rights while dismantling the barriers that prevent European companies from scaling. They're calling it "simplification," but make no mistake, this is a fundamental recalibration of how the EU thinks about digital regulation.

The package promises €5 billion in annual savings from reduced administrative burden, plus up to €150 billion in efficiency gains from new digital infrastructure (more on that below). But beneath the economic framing lies a profound rewrite of GDPR and the AI Act that's got civil society organizations like noyb and EDRi sounding the alarm.

Breaking it down

The five main changes

The Omnibus isn't just one piece of legislation. It's a complex ecosystem of amendments, strategies, and new regulations. There's a huge amount being proposed, and what follows is not a comprehensive legal analysis. Instead, here are our five main takeaways from what we've read so far.

1. A new AI Act timeline

The AI Act just came into force, and the implementation roadmap is already being rewritten. The Commission realized the technical infrastructure for compliance (harmonized standards) isn't ready, so they're shifting from fixed deadlines to condition-based timelines. Instead of "you must comply by X date," it's now "you must comply 6 months after the Commission confirms standards are available." The compliance clock only starts ticking once the standards bodies finish their work. There are still absolute backstop dates — December 2027 for things like biometrics and employment systems, and August 2028 for safety components in products — but companies won't be forced to comply before the technical standards actually exist.

There's also a new "minor task" exemption that lets companies self-assess their way out of high-risk categories. In short, if your AI is just scheduling interviews rather than ranking candidates, you may avoid the heavy compliance burden. And the European AI Office continues to gain power, moving enforcement for the most powerful models from national DPAs to Brussels.

2. Allowing personal data for AI training

This is where the proposal is most controversial. The Commission attempts to resolve the tension between data protection and the data-hungry nature of generative AI. The proposal explicitly amends GDPR to allow processing personal data for AI training based on "legitimate interest," with an opt-out mechanism. For those not familiar with GDPR, "legitimate interest" is a lawful basis that doesn't require asking permission: processing is assumed to be allowed unless someone objects. Critics argue this effectively legalizes mass data scraping for commercial AI development.

Even more controversially, there's a proposed "subjective approach" to defining personal data. That means if a company creates pseudonymized data and doesn't intend to re-identify individuals, it might not be considered "personal data" under GDPR. This shifts from an objective standard (is the person identifiable?) to a subjective one (does the company intend to identify them?). Max Schrems has already said noyb will challenge this in court if it passes.
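
To make the distinction concrete, here's a minimal sketch (TypeScript; the function and record shape are hypothetical illustrations, not anyone's actual pipeline) of the kind of pseudonymized record this debate turns on:

```ts
import { createHmac } from "node:crypto";

// Pseudonymization via a keyed hash: the same user always maps to the same
// token, but only someone holding `secretKey` can link tokens back to users.
function pseudonymize(userId: string, secretKey: string): string {
  return createHmac("sha256", secretKey).update(userId).digest("hex");
}

const record = {
  user: pseudonymize("alice@example.com", process.env.PSEUDO_KEY ?? "dev-only-key"),
  pagesViewed: 12,
  country: "DE",
};

// Objective standard: this is personal data as long as *anyone* (the key
// holder, or an attacker with auxiliary data) could re-identify the user.
// Proposed subjective standard: a company that discards the key and declares
// no intent to re-identify might argue the record falls outside GDPR entirely.
console.log(record);
```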


3. Folding the ePrivacy Directive into GDPR

The current ePrivacy Directive requires consent for most device access operations, creating banners on nearly every website. The Commission argues this causes "consent fatigue" where users just click "Accept All" without reading.

The proposal moves device access rules from the ePrivacy Directive into GDPR, essentially deprecating the Directive in favour of GDPR amendments: a significant structural change. It creates consent-exempt operations (aggregated audience measurement, security, network management), though critics are concerned the boundaries aren't clear and could create tracking loopholes. It also mandates that websites respect automated browser signals (like a "Reject All" setting), so users can set preferences once. Sites will still need to request consent for non-exempt purposes like behavioural advertising, so banners aren't going away completely.

4. The new European Business Wallet

This is the infrastructure play. Just like the EU Digital Identity Wallet for citizens, the Business Wallet gives legal entities a verified digital identity. Companies can share verified credentials instantly, bid for public contracts across borders without notarizing documents, and use qualified electronic signatures recognized across the EU. The Commission estimates this could save €150 billion annually. And that's where the real economic value is, not just in regulatory simplification.

5. New Single Reporting Portal

Right now, a cyber incident might trigger reporting under GDPR (to DPAs), NIS2 (to CSIRTs), DORA (to financial regulators), and the Cyber Resilience Act. The Omnibus creates a Single Reporting Portal where companies submit one report that gets automatically routed to the relevant authorities. To make this work, they're harmonizing definitions and timelines across all these acts.
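
As a thought experiment, the routing logic might look something like this (TypeScript; the types, flags, and rules are invented purely for illustration, since no such portal or API exists yet):

```ts
// Hypothetical sketch of the "submit once, route everywhere" idea.
type Regime = "GDPR" | "NIS2" | "DORA" | "CRA";

interface IncidentReport {
  description: string;
  personalDataBreached: boolean;     // would trigger GDPR notification to DPAs
  essentialServiceImpacted: boolean; // would trigger NIS2 reporting to CSIRTs
  financialEntity: boolean;          // would trigger DORA reporting
  productVulnerability: boolean;     // would trigger Cyber Resilience Act reporting
}

function routeReport(report: IncidentReport): Regime[] {
  const regimes: Regime[] = [];
  if (report.personalDataBreached) regimes.push("GDPR");
  if (report.essentialServiceImpacted) regimes.push("NIS2");
  if (report.financialEntity) regimes.push("DORA");
  if (report.productVulnerability) regimes.push("CRA");
  return regimes;
}

// One submission fans out to every relevant authority.
console.log(routeReport({
  description: "Ransomware on a payment processing cluster",
  personalDataBreached: true,
  essentialServiceImpacted: true,
  financialEntity: true,
  productVulnerability: false,
})); // -> ["GDPR", "NIS2", "DORA"]
```

The hard part of this design isn't the routing itself; it's what the Omnibus actually has to deliver to make it possible: harmonized definitions and timelines across all four regimes.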

A CTO's Take

The good, the bad, and the ugly

The Good: Simplification of consent is worth trying

The current ePrivacy Directive rules for consent banners definitely frustrate businesses and consumers alike, so I know I'm not alone in wanting to explore new ways to simplify this. The Directive has also aged badly: it fixates on "cookies" at a time when script-based tracking matters far more for user privacy. So this is a step in the right direction.

Additionally, "consent fatigue" is real: it drives users away and makes obtaining consent for actual processing harder than it should be, because everyone involved is annoyed. And automated signals are good. Even if no EU standard exists yet, we can look at adopting or extending earlier efforts like GPC (Global Privacy Control), which has been gathering support across the US.
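
As a rough sketch of what honoring such a signal looks like in practice, here's how a site can check for GPC today (TypeScript, browser context; `shouldShowConsentBanner` is my own illustrative wrapper, and an eventual EU signal may well differ):

```ts
// GPC is still a draft specification, so treat this as a pattern illustration,
// not a compliance recipe. Client-side, the proposal exposes a boolean on
// `navigator`; server-side, the same signal arrives as a `Sec-GPC: 1` header.
const gpcEnabled: boolean = (navigator as any).globalPrivacyControl === true;

function shouldShowConsentBanner(): boolean {
  if (gpcEnabled) {
    // The browser has already said "reject": record the objection for
    // non-exempt purposes instead of prompting the user again.
    return false;
  }
  // No signal present: fall back to an explicit consent prompt.
  return true;
}
```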

The Bad: Some slippery slopes

The wording around consent-exempt operations is vague enough to be concerning. The ePrivacy Directive has been the protection for users against unfettered tracking online, and the new exemptions for aggregated audience measurement and security purposes could easily be stretched to cover some very broad behavioural tracking if not enforced carefully. We've seen this movie before. What starts as a narrow exception for "aggregated" data becomes a loophole that bad actors drive a truck through.

And watering down the definition of personal data? That's a fundamental problem. The GDPR's strength has always been its high standard. Data is personal if someone can be identified, full stop. Moving to a subjective "intent-based" definition lets companies mark their own homework. Bad actors will claim their datasets are anonymous because they "don't intend" to re-identify people, even when the technical means exist to do so. This is a big red flag. It undermines the entire regulatory framework.

The Ugly: AI training as legitimate interest?

Reading through the Omnibus last week, this one caught me off guard. Creating a special carve-out so that AI training qualifies as "legitimate interest" under GDPR is a dramatic departure from how I've always thought about data privacy. The opt-out mechanism they're proposing is functionally useless in a world where data is scraped at scale. How is a user supposed to identify every company that might be training on their data and lodge an objection with each one?

From my point of view, the Commission is trying to solve a real problem (the legal uncertainty around training data) with a solution that simply accepts the status quo as the only viable outcome. It's not just about making the rules clearer; it's about shifting from a long history of consent-based processing to an opt-out model for one of the most data-intensive use cases imaginable. That's a big deal, and I'm surprised they went this far.

The Future in Europe

What comes next

The proposal now moves to the European Parliament, where it's going to be a battle. The center-right (EPP) and liberals (Renew) will likely support the competitiveness aspects, but the Left and Green factions are going to mount a fierce defense of GDPR. The "legitimate interest" for AI training and the "subjective" definition of personal data will be the main battlegrounds that we’ll see play out in the coming months.

After Parliament, it goes to the Council for negotiations with member states. They're expected to be broadly supportive of simplification (European economies are sluggish, and member states see this as a partial solution to help businesses expand), but there might be tension around centralizing powers in the AI Office. In my view, national regulators aren't going to love ceding authority to Brussels.

If it passes, implementation will be staggered. The AI Act delays mean full compliance for high-risk AI won't bite until late 2027 or 2028. The Business Wallet will likely roll out faster, driven by the urgent need for digital infrastructure (and, let's be honest, military mobility; there's a whole security dimension here that's under-reported).

The Bottom Line

What to take away

This is a calculated risk. The Commission is placing a bet that loosening the rules will unleash innovation and close the gap with the US and China. But there's a real danger that in doing so, Europe loses its unique selling point: digital trust. If the "Data Union" becomes indistinguishable from the data-extractive models of its rivals, Europe may find it has traded its soul for economic growth that, despite the sacrifices, never materializes.

As someone building privacy engineering tools, I'm watching this closely. The simplification of consent is genuinely helpful (and could move things in the right direction in Europe), but the slippery slopes around the definition of personal data and the AI training carve-out are fundamentally concerning for privacy. We'll need to see how this plays out in Parliament, but one thing's for sure: the "golden age" of uncompromising European digital regulation is ending, and we're entering a new era of "regulatory realism."


Speak with Us

If your firm is still treating governance as a checkpoint at the finish line, now is the moment to rethink. Book an intro with Ethyca to see how embedded governance can transform your AI development into a true competitive advantage.

About Ethyca: Ethyca is the trusted data layer for enterprise AI, providing unified privacy, governance, and AI oversight infrastructure that enables organizations to confidently scale AI initiatives while maintaining compliance across evolving regulatory landscapes.
