Compliance with UK data protection law while embracing emerging technologies

March 2024  |  SPECIAL REPORT: DATA PRIVACY & CYBER SECURITY

Financier Worldwide Magazine



In today’s rapidly evolving world, where transformative technologies are taking centre stage, the integration of advancements such as artificial intelligence (AI) into our daily lives opens new opportunities for businesses. However, these advancements also bring challenges, particularly in ensuring strong data privacy compliance. As organisations embrace AI for innovation, decision making and efficiency, navigating data protection compliance becomes increasingly difficult. Businesses must strike a careful balance between enjoying the benefits of AI and upholding the fundamental rights and freedoms afforded to individuals by UK data protection laws.

This article provides an overview of the main challenges surrounding data privacy compliance in the UK when using such technologies, outlining the responsibilities, ethical considerations and regulatory requirements that should be considered when implementing such new technologies. It also suggests practical steps which can be taken to assist with such compliance.

UK data protection laws

In the UK, data protection laws stem from a set of key principles designed to safeguard individuals’ privacy and to ensure the responsible handling of their personal information. These principles are set out in the Data Protection Act 2018 and the UK General Data Protection Regulation (UK GDPR) and include the concepts of lawfulness, fairness and transparency in the processing of personal data.

Additionally, UK data protection law emphasises the need to collect data for specific, explicit purposes and to ensure that the data collected is relevant and limited to what is genuinely needed. Organisations are also required to maintain the accuracy of data and to store it securely, respecting individuals’ rights to access, rectify and erase their personal information, subject to certain limitations. Moreover, these principles stress the importance of accountability, placing responsibility squarely on the shoulders of organisations to demonstrate compliance with data protection regulations and to adopt privacy-by-design practices in the development and deployment of technologies. This means ensuring that data protection is considered at the outset and built into any new technology from the ground up.

The integration of emerging technologies, notably AI, introduces a distinct set of challenges in meeting the UK data protection law principles, including those outlined below.

Privacy by design

A key step in meeting the privacy by design principle of UK data protection law is to conduct data protection impact assessments (DPIAs) at the early stages of AI system design, as part of the development process, to identify and mitigate privacy risks. The UK data protection regulator, the Information Commissioner’s Office (ICO), has recently issued guidance on completing DPIAs when using AI. In particular, the ICO highlights that the DPIA needs to make clear how and why the organisation is going to use AI to process the data. This requires detail on how data will be collected, stored and used; the volume, variety and sensitivity of the data; the nature of the organisation’s relationship with individuals; and the intended outcomes for individuals, for wider society and for the organisation collecting the data.

Further, when assessing proportionality, organisations need to weigh up their interests in using AI against the risks it may pose to the rights and freedoms of individuals. For AI systems, the ICO guidance makes it clear that, in particular, organisations need to think about any detriment to individuals that could follow from bias or inaccuracy in the algorithms and data sets being used.
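By way of illustration, the points the ICO asks a DPIA to cover could be captured in a simple internal record that flags any gaps before an AI project proceeds. This is a minimal sketch only; the class and field names are this example’s own invention, not ICO terminology, and a real DPIA template would be far more detailed.

```python
from dataclasses import dataclass, field

# Illustrative DPIA record for an AI use case. Field names are
# invented for this sketch and are not official ICO terminology.
@dataclass
class AIDpiaRecord:
    purpose: str                               # how and why AI will process the data
    data_categories: list = field(default_factory=list)   # volume/variety of data
    relationship_to_individuals: str = ""      # e.g. job applicants, customers
    intended_outcomes: str = ""                # outcomes for individuals and society

    def missing_fields(self):
        """Return the names of fields still left blank, so the DPIA
        cannot be signed off while key questions remain unanswered."""
        gaps = []
        if not self.purpose:
            gaps.append("purpose")
        if not self.data_categories:
            gaps.append("data_categories")
        if not self.relationship_to_individuals:
            gaps.append("relationship_to_individuals")
        if not self.intended_outcomes:
            gaps.append("intended_outcomes")
        return gaps

record = AIDpiaRecord(purpose="CV screening with an ML model")
print(record.missing_fields())
```

A record such as the one above would be flagged as incomplete until the remaining questions are answered, mirroring the ICO’s expectation that the DPIA explains the full context of the processing.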

Data governance

Establishing and maintaining dynamic data governance frameworks that can adapt to the evolving nature of AI will be a challenge for businesses. Developing and implementing an AI playbook as part of a comprehensive strategy could be a valuable tool in managing legal risks associated with AI implementation. This involves identifying specific organisational risks, assessing systems for personal data processing, and establishing a risk evaluation process for newly adopted technologies. Additionally, creating an AI policy for employees, combined with training sessions, will help ensure clarity on the permissible use of AI in daily work, emphasising approved practices and guiding staff on data input for AI tools.

Ensuring comprehensive data lifecycle management is implemented when using new emerging technologies will also be key, covering data collection, processing, storage and disposal, while aligning with data protection principles and regulatory requirements.
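The disposal stage of that lifecycle lends itself to simple automation. As an illustrative sketch, and assuming retention periods have already been agreed at the DPIA stage (the dataset names and periods below are invented for the example), a periodic job might flag datasets due for disposal as follows:

```python
from datetime import date, timedelta

# Illustrative retention schedule agreed at the DPIA stage;
# dataset names and periods are invented for this example.
RETENTION = {
    "training_logs": timedelta(days=365),
    "support_tickets": timedelta(days=730),
}

def due_for_disposal(dataset: str, collected: date, today: date) -> bool:
    """True once the agreed retention period for a dataset has elapsed."""
    return today - collected > RETENTION[dataset]

# Logs collected on 1 January 2023, checked on 1 March 2024:
print(due_for_disposal("training_logs", date(2023, 1, 1), date(2024, 3, 1)))
```

A check of this kind helps evidence the storage limitation principle: data is not simply kept indefinitely because an AI system might one day find it useful.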

Data minimisation and purpose limitation

Balancing the need to use extensive data for effective AI training with the principles of data minimisation and purpose limitation involves carefully selecting and processing only the data that is necessary for specific, well-defined purposes. In practice this can be difficult to ensure, as AI tools have a tendency to ‘scrape’ vast amounts of data.
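One practical way to enforce minimisation is to filter records against a list of fields approved for each stated purpose before anything reaches an AI pipeline. The sketch below is illustrative only; the purpose and field names are invented, and the approved list would in practice come from the relevant DPIA.

```python
# Illustrative data-minimisation filter: only fields approved for a
# stated purpose pass through to the AI pipeline. The purpose and
# field names here are invented for the example.
APPROVED_FIELDS = {
    "credit_scoring": {"income", "outstanding_debt", "repayment_history"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Drop every field not approved for the given purpose."""
    allowed = APPROVED_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

raw = {"name": "A. Smith", "income": 32000, "postcode": "G1 1AA",
       "repayment_history": "good"}
print(minimise(raw, "credit_scoring"))
```

Here the name and postcode never reach the model, because they were not approved as necessary for the credit-scoring purpose.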

Algorithmic transparency and explainability

Despite the growing use of AI, many systems operate in ways that are unclear to those providing them, those deploying them and those affected by their use – the systems are so complex that even their providers are often unable to explain the decisions and outcomes of the systems they have built. To combat this challenge, it is key to invest in the research and development of ‘explainable AI’ techniques that make complex algorithms more understandable to non-experts and fulfil individuals’ rights to know how decisions about them are made. Doing so can help ensure compliance with several data protection principles, such as transparency, accountability and fairness.

Additionally, addressing the ethical issues of transparency, ensuring that explanations provided are not only technically accurate, but also ethically sound, especially in cases where decisions impact individuals’ lives, will be an important step in the future.

Data security

Given the nature of these emerging technologies, security needs to be a main consideration. This includes, for example, integrating strong security measures into AI systems, including encryption, access controls and regular security audits, to protect against data breaches and unauthorised access. Data breaches happen every day, with high-profile breaches regularly hitting the headlines, reminding us that no organisation is immune to these risks. This will remain a key area where the importance of preparation and prevention cannot be overstated.
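Of the measures mentioned above, access controls are the most straightforward to illustrate. A minimal role-based sketch might look like the following; the roles and permissions are invented for the example, and a production system would rely on an established identity and access management platform rather than a hand-rolled check.

```python
# Illustrative role-based access check for an AI system's data store.
# Roles and permissions are invented for this example.
PERMISSIONS = {
    "data_scientist": {"read_training_data"},
    "dpo": {"read_training_data", "read_audit_log"},
}

def can_access(role: str, action: str) -> bool:
    """Allow an action only if the role has been explicitly granted it;
    unknown roles are denied by default."""
    return action in PERMISSIONS.get(role, set())

print(can_access("data_scientist", "read_audit_log"))
```

Denying by default, as above, reflects the same privacy-by-design mindset the article describes: access is granted deliberately, not assumed.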

Consent challenges

Due to the complex nature of these technologies, identifying a lawful basis for the processing of data can be more difficult. This is particularly the case when relying on consent. Organisations relying on consent must ensure that individuals provide informed consent, understanding the specific purposes and potential consequences of data processing in AI applications.

As well as ensuring that informed consent is obtained, it is important to ensure that ongoing consent is given. As AI applications evolve and potentially use data for new purposes not initially foreseen, this will become increasingly difficult to manage. Therefore, it is important to have a clear understanding of what the AI does and for what purpose, so that this can be managed effectively.
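Managing this effectively usually means recording the purposes each individual has consented to and checking any new AI use against that record. The sketch below is illustrative only; the identifiers and purpose names are invented, and a real consent record would also capture when and how consent was given.

```python
# Illustrative consent register: processing is only permitted for
# purposes the individual has actually consented to, so a new AI
# purpose triggers a fresh consent request rather than silent reuse.
# Identifiers and purpose names are invented for this example.
consent_register = {
    "user-123": {"service_improvement"},
}

def may_process(user_id: str, purpose: str) -> bool:
    """True only if this individual has consented to this purpose."""
    return purpose in consent_register.get(user_id, set())

# An AI team later wants the same data for model training, a purpose
# not covered by the original consent:
print(may_process("user-123", "model_training"))
```

The check fails for the new purpose, which is the desired outcome: the organisation must go back to the individual rather than quietly repurposing their data.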

Cross-border data transfers

Organisations need to address the complexities of international data transfers when using AI technologies, including selecting lawful mechanisms such as the UK’s international data transfer agreement, the European Union’s (EU’s) standard contractual clauses, or binding corporate rules.

They also need to navigate data localisation requirements and restrictions in different jurisdictions that may impact the cross-border transfer of data.

Ethical use of AI

Implementing strategies to mitigate biases in AI algorithms, ensuring fair and non-discriminatory outcomes, will be necessary.

Building and maintaining public trust by adopting ethical AI practices, transparency and accountability measures, while aligning with societal expectations, will also be expected of businesses going forward.

Under UK data protection law, special consideration must be taken when processing personal data of children, with the ICO having issued guidance specifically focused on this. As part of that guidance, it is clear that organisations have an obligation to properly assess privacy risks to children – an obligation which may be more difficult to meet when using AI technologies.

Data subject rights and access

Organisations need to ensure transparent communication with individuals about how they can exercise their data subject rights in AI-driven systems, even when algorithms contribute to decision making.

Also important is addressing challenges in rectifying inaccuracies in AI-driven decisions, where the complexity of algorithms may hinder straightforward corrections.

Supplier management

When selecting AI technology suppliers, organisations need to conduct thorough due diligence, including assessing their data protection practices and ensuring contractual agreements align with regulatory requirements.

Further, it is important to establish mechanisms for ongoing oversight and accountability concerning data protection practices when third-party suppliers are involved in AI development or services.

Incident response and accountability

Comprehensive incident response plans specific to AI-related incidents will need to be developed, considering the unique challenges posed by the complexity of AI systems.

Part of this process involves defining clear accountability mechanisms for AI systems, including roles and responsibilities for compliance with data protection laws, in the event of an incident.

These challenges underscore the need for careful consideration when balancing the advantages of emerging technologies like AI with the legal obligation on businesses to ensure robust data protection compliance. The rapidly evolving digital space and the dynamic nature of AI technologies present intricate hurdles to achieving compliance, and as AI applications continue to evolve, critical issues such as algorithmic transparency, consent complexities and the ethical implications of automated decision making will likely come to the forefront. Against this backdrop, one thing seems clear: a thoughtful and adaptive approach will be required if organisations want to responsibly harness the benefits of AI within the parameters of UK data protection law.

 

Jennifer Barr is a senior associate and Victoria Guild is an associate at CMS Cameron McKenna Nabarro Olswang LLP. Ms Barr can be contacted on +44 (0)141 304 6233 or by email: jennifer.barr@cms-cmno.com. Ms Guild can be contacted on +44 (0)131 200 7509 or by email: victoria.guild@cms-cmno.com.

© Financier Worldwide

