GDPR enforcement: how EU regulators are shaping AI governance

March 2026  |  SPECIAL REPORT: DATA PRIVACY & CYBER SECURITY

Financier Worldwide Magazine



The rapid deployment of artificial intelligence (AI) systems across the European Union (EU) has brought data protection law to the forefront of AI governance. Because many AI systems rely on large-scale processing of personal data, the General Data Protection Regulation (GDPR) has, in practice, operated as the EU’s first effective and horizontally applicable enforcement framework for AI, well before the EU Artificial Intelligence Act (AI Act).

More than seven years after the GDPR became applicable, national data protection authorities (DPAs) are applying data protection rules concretely to AI-related practices. Across the EU, DPAs have initiated investigations, adopted corrective measures and, in some cases, imposed significant fines in relation to a wide range of AI-enabled technologies, including biometric identification, facial recognition, automated decision making and profiling, and the training and deployment of AI models. This activity reflects the growing role of DPAs in safeguarding the fundamental rights of individuals.

At the same time, enforcement remains uneven across the EU. Differences in resources, technical expertise and national priorities have produced varying levels of scrutiny and intervention, contributing to a fragmented application of the GDPR in AI-related contexts. The growing technical complexity of AI systems has further tested the limits of a technology‑neutral framework designed for diverse processing operations.

Alongside enforcement, EU and national authorities rely on guidance, opinions and other soft-law instruments to clarify how core GDPR principles (such as lawfulness, transparency, data minimisation and accountability) apply to AI. These efforts help shape compliance practices and provide a degree of legal certainty for organisations.

This environment is evolving amid a broader policy debate on European competitiveness and regulatory burden. The Draghi report on EU competitiveness and the European Commission’s (EC’s) proposed “Omnibus” simplification agenda reflect a renewed focus on streamlining obligations relating to data and AI, while preserving a high level of protection for fundamental rights.

How EU data protection authorities are enforcing the GDPR in AI cases

DPAs have built experience addressing AI-related issues through guidelines, best practices and concrete enforcement actions at national and international levels. Multiple actions across the EU show the GDPR operating as a de facto enforcement framework for AI.

More specifically, the European Data Protection Board (EDPB) recognised in a statement (3/2024 on data protection authorities’ role in the Artificial Intelligence Act framework) that DPAs should have a prominent role in AI-related enforcement.

The EDPB encouraged member states to designate DPAs as market surveillance authorities for certain high‑risk AI systems, emphasising their experience in fundamental‑rights‑based supervision. This positions DPAs as natural interlocutors for AI systems involving personal data, reinforcing continuity between GDPR supervision and future AI Act oversight.

AI‑enabled biometric technologies have faced stringent enforcement from DPAs. Facial recognition systems built on large‑scale scraping of images from public websites and social media have been found to infringe core GDPR principles, including lack of a valid legal basis, unlawful processing of special categories of data and failure to ensure data subject rights.

In a prominent case against Clearview AI, the Dutch authority imposed a €30.5m fine for illegally collecting photos of individuals’ faces to provide a facial recognition service to its clients (in particular to intelligence and investigative services). This position was echoed by many other DPAs, including in France and Greece, which also ordered the deletion of the illegally collected biometric data.

Automated decision making (ADM) and profiling form another central enforcement area. DPAs have sanctioned organisations targeting individuals for commercial purposes without a valid legal basis or adequate transparency. For example, in 2022 the Spanish DPA fined a financial institution €3m for unlawful processing and insufficient information in relation to commercial profiling. In 2025, the Hamburg (Germany) DPA fined a financial services provider nearly €500,000 for automated rejection of credit card applications based solely on algorithms, without human oversight or the adequate explanations required under article 22 of the GDPR.

Enforcement has also focused on AI-driven systems affecting minors. The Italian DPA imposed a €5m fine on an AI developer for several non‑compliances in relation to its virtual companion, including failure to implement adequate age‑verification mechanisms despite claims that minors were excluded.

Despite growing activity, GDPR enforcement in AI-related contexts remains uneven across the EU. Some DPAs pursue proactive strategies combining investigations, formal decisions and public guidance, while others intervene more selectively, often based on complaints. Divergence is visible in the number of AI-related decisions and in the use of ex officio investigations and preventive tools such as data protection impact assessments. The Italian DPA, for instance, has been among the most active regulators in AI-related enforcement.

Guidance from EU authorities on developing and using AI systems

At the EU level, the EDPB has clarified in an opinion (28/2024 on data protection aspects related to AI models) that AI models and automated decision‑making systems remain subject to the full set of GDPR principles. As such, controllers should assess risks across the AI lifecycle, from collection and training to deployment and downstream use. In addition, complexity or opacity does not diminish the duty to explain processing in a meaningful way.

Identifying an appropriate legal basis for AI-related processing and for model training in particular has become a central issue. The EDPB indicated that reliance on legitimate interest is not excluded, but that it requires strict necessity and a balancing test, supported by documented safeguards and an assessment of whether less intrusive means exist.

This position was also upheld by the French DPA’s recommendations and the guidance of the Confederation of European Data Protection Organisations’ (CEDPO’s) AI Working Group (‘Generative AI: The Data Protection Implications’, 2023). This approach is now complemented by the EC’s Digital Omnibus proposal, which would explicitly recognise legitimate interest as a potentially appropriate legal basis for certain AI-related processing activities.

EU‑level guidance has addressed claims that AI models are ‘anonymous’ once trained. The EDPB takes a cautious view, noting that “AI models trained on personal data cannot, in all cases, be considered anonymous” and that anonymisation must account for realistic risks of re-identification, memorisation or extraction of personal data from models. Where such risks cannot be excluded, the processing remains subject to the GDPR.

At national level, several DPAs have issued practical guidance for developers and deployers of AI systems, complementing the EU-level guidance developed by the EDPB. The French DPA published numerous recommendations on integrating GDPR requirements into AI projects, focusing on governance, documentation, transparency and the exercise of data subject rights.

Similarly, the Spanish DPA issued early guidance on GDPR compliance for processing operations embedding AI, addressing legal bases, ADM and the need for data protection impact assessments.

From GDPR enforcement to the AI Act: a limited but relevant continuity

The AI Act does not displace the role already played by DPAs in supervising AI-driven systems involving personal data – indeed, it specifically assigns oversight responsibilities to DPAs for certain types of AI applications.

Prior to the AI Act, GDPR enforcement provided the main operational framework for scrutinising ADM, profiling and biometric processing. This experience offers a relevant, but not exhaustive, reference point for AI Act implementation, particularly where high‑risk AI systems intersect with individual rights.

While certain AI systems may fall simultaneously within the scope of both the GDPR and the AI Act, EU legislators avoided creating a unified enforcement regime. Instead, the AI Act preserves a decentralised model in which DPAs may play a role for specific high‑risk uses, alongside other national authorities. This limited overlap complements, rather than absorbs, existing data protection supervision.

Competitiveness, enforcement and regulatory simplification: the Draghi Report and the Digital Omnibus Agenda

GDPR enforcement activity is embedded in a broader policy debate on European competitiveness and regulatory burden. The issue is not the legitimacy of GDPR enforcement, but the cumulative effect of overlapping obligations on operators developing and deploying AI at scale.

The Draghi Report identifies regulatory complexity and compliance costs as structural factors in Europe’s innovation and productivity gap. Rather than calling for deregulation, it advocates for a more proportionate and predictable implementation of EU rules, warning that fragmented supervision, duplicative procedures and unclear timelines can discourage investment, particularly in fast‑moving sectors such as AI.

Against this background, the EC launched a ‘Digital Omnibus’ initiative to recalibrate implementation of EU digital legislation, including the AI Act and, indirectly, aspects of data protection enforcement. Unlike earlier simplification initiatives that primarily affected adjacent frameworks, the Digital Omnibus proposal would amend the GDPR, with changes particularly relevant for AI-driven processing.

The EC proposes targeted adjustments to core GDPR concepts and procedures, including clarifications to the notion of personal data in light of recent case law and narrowly framed exceptions for processing special categories of data in the context of AI systems. These amendments recognise that the existing GDPR enforcement framework, while the primary supervisory baseline for AI, requires adaptation to remain operational and proportionate.

In parallel, the EC introduced a ‘Digital Omnibus on AI’ aimed at recalibrating implementation of the AI Act itself. It responds to early enforcement and compliance challenges, including delays in designating national authorities, the absence of harmonised standards for high‑risk systems, and resulting legal and operational uncertainty.

To address these issues, the proposal would adjust the application timeline for high‑risk obligations, extend simplified documentation and compliance regimes to mid-sized companies, allow greater flexibility in post‑market monitoring, and expand access to sandboxes and real‑world testing.

 

Ahmed Baladi is a partner at Gibson Dunn. He can be contacted on +33 (1) 56 43 13 00 or by email: abaladi@gibsondunn.com.

© Financier Worldwide

