Automated decisions under the GDPR and the AI Act: who can control their fate?

March 2024  |  SPECIAL REPORT: DATA PRIVACY & CYBER SECURITY

Financier Worldwide Magazine



Shakespeare wrote in Hamlet: “Our wills and fates do so contrary run, that our devices still are overthrown.” Was he showing prophetic tendencies, foreseeing the rise of artificial intelligence (AI) and discussing the ability of humankind to control its destiny? What would Shakespeare have made of the European Union’s (EU’s) General Data Protection Regulation (GDPR), now rapidly approaching its sixth anniversary, or the soon-to-be-enacted Artificial Intelligence Act (AI Act)? Would he have agreed with the inevitable focus by regulators and courts on how to regulate decisions taken by computers which impact individuals’ rights and freedoms?

The impact of these laws, and of the accompanying regulatory focus, will be felt more keenly by the financial world than by other industries, as decisions once taken by people are increasingly taken or influenced by algorithms. This follows an industry push to incorporate AI into products, to make effective decisions more quickly and to automate processes.

Typical scenarios include credit decisioning, calculating remuneration, hiring and firing, and profiling and segmenting customers to determine how they are treated. While the GDPR has already impacted how these activities can be carried out, some recent cases and the new AI Act are bringing ever more scrutiny.

The significance of the GDPR

The GDPR and its Brexit sibling, the UK GDPR, impact businesses that operate in, sell to or monitor individuals in the European Economic Area (EEA) and the UK. They place a variety of obligations on businesses to ensure that processing is fair, that individuals understand what data is used and how it is used, and that such use is lawful.

A specific obligation under article 22 of the GDPR concerns the circumstances in which decisions may be taken by automated means without human involvement, and the appropriate safeguards that businesses should put in place to protect individuals.

These rules do not apply to all automated decisions. Unfortunately, the GDPR itself does not clearly delineate the boundaries of the rule through practical examples, so as automated decisions become more prevalent we must look to the courts and regulators for guidance, particularly on the meaning of a “decision based solely on automated processing”, on how “significant” an effect a decision must have, and on whether it produces “legal effects”, all of which are discussed below. Automated decisions which come within the rule are prohibited unless certain conditions are satisfied.

First, a 2023 ruling by the Court of Justice of the European Union (CJEU), frequently referred to as the SCHUFA case, gave a broad interpretation of ‘decision’. Prior to that ruling, it would have been easy to think that only the point at which a determination is made about an individual (such as a decision to approve a loan) constituted the ‘decision’, rather than any preparatory calculations.

But the CJEU found that a probability value provided by a credit reference agency constitutes a decision, where the third party that makes a loan decision ‘draws strongly’ on that probability value to establish, implement or terminate a contractual relationship. The court’s reasoning was that, if the credit reference agency was not subject to the automated decision making rules, then the individuals about whom decisions are made would not be able to exercise their rights, including the right to obtain meaningful information about the logic involved (because the lender would not have it) and to contest the decision.

The impact of this decision is that a greater number of participants in the financial and data markets, who contribute data, scores and profiles, may come within the rules on automated decision making. Such organisations will then need to consider how to mitigate these impacts through their contractual arrangements, to provide individuals with clear explanations of how decisions are made (breaking down sometimes complex algorithms into understandable terms), and to put in place the processes and resources that allow individuals to contest decisions.

Second, decisions must have a significant enough effect on an individual to come within the rules. Judicial decisions to date have focused on areas where there is a strong economic impact, for example the withholding of social security payments and the removal of a person’s permission to work. But there have also been cases concerning algorithmic fraud detection tools which produce a risk report about an individual.

For financial institutions (FIs) and their partners, this means scrutiny should be given not just to situations where a significant financial impact flows directly from an automated decision taken by the organisation, but also to situations where a report is created about an individual or group of individuals. Such a report could then be used by that organisation or one of its partners to, for example, close an account or decline to offer financial products, with a significantly detrimental impact on an individual.

Of course, financial products are not the only area to be impacted. Organisations should also consider their recruitment and remuneration practices where these involve, for example, hiring and bonus decisions being taken without meaningful human involvement – rubber-stamping a decision or basing it solely on the automated rankings of individuals is unlikely to meet GDPR requirements.

These are just some of the issues arising from the GDPR’s rules on automated decision making. It is important to note that the general prohibition on making automated decisions which have legal or similarly significant effects does not apply where such a decision is necessary for entering into or performing a contract, where the decision is authorised by law, or where the individual has provided their explicit consent. As you would expect, these exceptions also have their nuances.

Evaluating the level of risk under the AI Act

At the time of writing, the final version of the AI Act is about to be published. The AI Act will prohibit certain uses of AI and place restrictions on others. From the perspective of financial organisations making decisions about individuals through the use of algorithms, below are three of the more impactful provisions and what they mean.

It is worth noting first that while the AI Act is an EU law, companies operating across borders may be impacted even if they are not themselves established in the EU. This is because the scope of the AI Act, much like the GDPR, extends not just to organisations in the EU but also to those outside it. The scoping provisions are complex but, in brief, organisations deploying AI in the EU, or where the AI system’s output is used in the EU, as well as importers, distributors and authorised representatives, may be impacted.

First, the AI Act will prohibit eight specified uses of AI entirely. From the perspective of automated decision making for FIs, the most notable are, in summary: (i) AI systems using subliminal, manipulative or deceptive techniques which distort behaviour and impair informed decision making in a way that is likely to cause significant harm; (ii) evaluating or classifying individuals or groups based on their behaviour or characteristics, with a resulting social score leading to detrimental or unfavourable treatment that is unjustified or disproportionate; (iii) certain uses of biometric and facial recognition; and (iv) inferring emotions in the workplace.

Second, the AI Act places restrictions on ‘high risk’ AI systems. One criterion that organisations will need to consider is whether the AI system poses a significant risk of harm to individuals, including by materially influencing the outcome of decision making. Systems that merely undertake preparatory tasks for assessments in certain use cases, improve the results of previously completed human activity, perform narrow procedural tasks or detect decision-making patterns may fall outside the high-risk category.

Finally, as with automated decision making under the GDPR, human oversight will be an important safeguard. The level of oversight will of course vary, depending on the level of risk, the degree of autonomy and the context in which the AI system is being used. Training and support for those overseeing high risk AI systems will be key.

For those who winced at the size of fines under the GDPR, take note that the potential for fines under the AI Act is greater. As with the GDPR, the fines are staggered depending on the type of violation, but the maximum fine is €35m or up to 7 percent of total worldwide annual turnover for the preceding financial year, whichever is higher.
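
To make the arithmetic concrete, below is a minimal sketch in Python of how that cap works; the turnover figure and the function name are illustrative assumptions only, not a calculation method prescribed by the AI Act.

    def max_ai_act_fine(worldwide_annual_turnover_eur: float) -> float:
        # Cap for the most serious violations: the higher of EUR 35m or
        # 7 percent of total worldwide annual turnover for the preceding
        # financial year.
        return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

    # Hypothetical example: a group with EUR 600m of turnover faces a cap
    # of EUR 42m, because 7 percent of 600m exceeds the 35m floor.
    print(max_ai_act_fine(600_000_000))  # 42000000.0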

Other more foreseeable costs include updating contractual terms with customers and service providers, and developing and implementing new data governance policies to maintain data integrity and trust in the decision-making process. The use of AI impact assessments to evaluate the risks is likely, as with data protection impact assessments under the GDPR today, to become commonplace.

Deciding on risk mitigations

Banks, lenders and service providers in the financial ecosystem, when making decisions or helping others to make decisions which impact individual customers and employees, will need to consider their exposure under the GDPR and the AI Act. First, they need to understand whether their organisation and its products and services are within scope. Second, they need to identify and evaluate the systems and processes to understand and mitigate risks. And third, they need to demonstrate compliance through robust documentation, transparent communication and human oversight.

As the use of automated decisions increases, individuals served by and serving in the financial markets will, like Shakespeare’s Othello, be asking: who can control their fate? Ensure your organisation can.

 

Robert Fett is a senior associate at Hogan Lovells. He can be contacted on +44 (0)20 7296 5312 or by email: robert.fett@hoganlovells.com.

© Financier Worldwide

