Regulating AI and enforcing privacy laws through landmark cases and regulatory practice
March 2026 | SPECIAL REPORT: DATA PRIVACY & CYBER SECURITY
Financier Worldwide Magazine
Artificial intelligence (AI) has become deeply embedded in routine decision making across both public and private sectors. From recommendation engines and credit assessments to biometric identification and generative services, AI systems now process personal data at an unprecedented scale and level of complexity.
This evolution has placed sustained pressure on traditional legal oversight mechanisms, which were not originally designed to govern opaque, adaptive and data-intensive technologies.
Rather than waiting for wholly new AI-specific statutes, regulators and courts have increasingly relied on existing privacy and data protection frameworks to impose accountability on AI deployments. In parallel, regulatory authorities have issued targeted guidance to address practical governance gaps exposed by algorithmic systems.
The regulatory response, therefore, reflects not the displacement of privacy law but its reinterpretation. Through enforcement actions, judicial reasoning and administrative practice, foundational principles such as lawfulness, fairness, transparency and security are being recalibrated for algorithmic contexts.
This article examines how AI regulation is being shaped in practice through enforcement and regulatory guidance, with particular attention to developments in the European Union (EU), the US and China.
Privacy law as a primary tool for AI accountability
Across jurisdictions, regulators have consistently emphasised that AI systems remain fully subject to existing privacy and data protection obligations. Neither technical complexity nor automation reduces legal responsibility.
On the contrary, AI-driven processing is frequently treated as inherently high risk due to its scale, opacity and capacity to affect individual rights in systematic and far-reaching ways.
In enforcement practice, AI-related violations are increasingly framed as governance failures rather than isolated technical defects. Common findings include the absence of a valid legal basis for processing, insufficient transparency toward affected individuals, weak internal accountability mechanisms and inadequate security safeguards.
This governance-oriented framing has enabled regulators to take meaningful enforcement action even where AI-specific legislation remains incomplete or continues to evolve.
Enforcement and case law: defining the boundaries of acceptable AI use
Automated decision making and profiling. Regulatory enforcement in the EU has clarified that large-scale profiling and automated decision making are subject to strict legal constraints. Authorities have challenged practices such as behavioural targeting, algorithmic recommendation and dynamic pricing, where consent was improperly obtained or where individuals were not provided with meaningful information about how decisions affecting them were made.
A recurring theme in these cases is the rejection of so-called ‘black box’ defences. Organisations are expected to maintain a working understanding of how their AI systems function in practice, at least to the extent necessary to assess legality, proportionality and risk. In several enforcement actions, the inability to explain system behaviour has been treated as a substantive compliance failure rather than a mere documentation shortcoming.
Bias, fairness and discriminatory outcomes. Courts and regulators have also focused on the downstream effects of automated systems, particularly where algorithmic decisions result in biased or discriminatory outcomes. In employment, finance and insurance contexts, authorities have emphasised that delegating decision making to algorithms does not shift responsibility away from the organisations that deploy them.
Although privacy law does not always explicitly prohibit discrimination, enforcement practice increasingly recognises that unjustified or systematically biased automated outcomes undermine core data protection principles, including fairness and proportionality. This trend has strengthened the link between privacy compliance and broader accountability for the social and economic effects of AI-driven decision making.
Biometric and surveillance technologies. Facial recognition and biometric identification technologies have attracted heightened regulatory scrutiny due to their intrusiveness and potential for systemic abuse. Regulators have consistently classified biometric data as highly sensitive and subjected its processing to strict necessity and proportionality assessments.
Importantly, enforcement actions have rejected arguments that deployment in public spaces diminishes reasonable expectations of privacy. Instead, mass biometric surveillance has been treated as inherently high risk, often requiring explicit legal authorisation, prior impact assessments, and robust technical and organisational safeguards.
Regulatory guidance: translating law into practice
EU. In the EU, the General Data Protection Regulation (GDPR) continues to function as the central enforcement instrument for AI-related processing. Guidance from national supervisory authorities has increasingly focused on operational issues such as data minimisation during model training, the explainability of automated decisions and the mandatory use of data protection impact assessments for high-risk processing activities.
The AI Act builds on this enforcement experience rather than replacing it. Its risk-based structure reflects lessons learned under the GDPR, formalising expectations around documentation, testing, human oversight and post-market monitoring for systems that pose material risks to individuals or society.
US. In the US, AI regulation has developed primarily through enforcement rather than comprehensive legislation. The Federal Trade Commission has repeatedly stated that existing consumer protection and privacy laws apply fully to AI systems. Enforcement actions have targeted misleading representations about AI capabilities, failures to secure training data and opaque automated practices that result in consumer harm.
This approach relies less on prescriptive technical standards and more on post hoc accountability. While it creates a degree of regulatory uncertainty, it places particular emphasis on internal governance, accurate disclosures and defensible risk management practices.
China. China’s approach to AI and privacy regulation differs markedly from that of Western jurisdictions. Rather than relying primarily on litigation or judicial interpretation, China has adopted a detailed administrative framework that integrates privacy protection, cyber security and algorithm governance.
The Personal Information Protection Law establishes clear obligations for personal data processing, including specific provisions governing automated decision making. These provisions emphasise transparency and fairness and, in certain circumstances, grant individuals the ability to opt out of automated decisions that have a significant impact on their rights and interests. The Data Security Law and the Cybersecurity Law further extend regulatory oversight to data classification, cross-border transfers and critical information infrastructure protection, all of which directly affect AI development and deployment.
China’s regulation of algorithms represents a deliberate regulatory choice. Providers of algorithmic recommendation services are required to register certain systems with regulators and conduct internal risk assessments, effectively treating algorithms as regulated objects rather than purely internal technical tools.
Interim measures on generative AI services extend this logic to model training and output governance. Providers must ensure that training data is lawfully sourced, personal information is adequately protected, and technical safeguards are in place to prevent misuse or data leakage. Privacy compliance is explicitly linked to security and content governance, reflecting a preventive and administrative regulatory philosophy.
Enforcement practice in China typically emphasises corrective measures rather than punitive sanctions. Regulators frequently order remediation, suspend non-compliant services or publicly name offending organisations. Although judicial precedent plays a limited role, these administrative actions exert significant influence on industry behaviour and function as practical regulatory guidance.
Common regulatory signals across jurisdictions
Despite differences in legal systems and enforcement mechanisms, several consistent regulatory expectations have emerged: (i) AI systems that materially affect individuals are treated as high-risk processing activities; (ii) organisations remain accountable for automated decisions, regardless of technical complexity; (iii) transparency and explainability are operational requirements rather than abstract principles; (iv) weak cyber security controls increasingly trigger privacy enforcement; and (v) risk and impact assessments are becoming baseline obligations rather than best practices.
Implications for cyber security and privacy professionals
For practitioners, AI governance is no longer a forward-looking concern but an immediate operational requirement. Effective programmes increasingly involve embedding privacy and security controls throughout the AI development lifecycle: conducting structured assessments of algorithmic risk and impact, securing training data, models and inference interfaces, and maintaining documentation capable of withstanding regulatory scrutiny.
Organisations operating across jurisdictions must also recognise that compliance approaches are not automatically transferable. Controls designed to satisfy GDPR requirements may be insufficient under China’s administrative regime, while US enforcement risks often arise from misleading practices rather than formal statutory non-compliance.
Conclusion
AI regulation is evolving through enforcement and regulatory practice rather than abstract theory. Courts and regulators are increasingly confident in applying existing privacy and data protection principles to automated systems, often in response to concrete and demonstrable harms.
The emerging regulatory landscape is characterised not by hesitation but by consolidation. AI does not exist outside the law. Organisations that treat privacy, security and accountability as foundational elements of AI deployment – rather than as post hoc compliance exercises – will be best positioned to operate sustainably in an increasingly regulated global environment.
Great Gu is China chief information security officer at Haleon. He can be contacted by email: great.x.gu@haleon.com.
© Financier Worldwide
BY Great Gu, Haleon