Rising risks: navigating the growing landscape of AI litigation in the UK

June 2025  |  SPECIAL REPORT: INTERNATIONAL DISPUTE RESOLUTION

Financier Worldwide Magazine



Artificial intelligence (AI) is no longer a futuristic concept – it is a core component of how modern businesses operate, powering everything from customer interactions to high-stakes decision making. As the technology becomes more sophisticated and more deeply embedded across industries, so too do the risks associated with its use.

Among the most pressing of these is the growing threat of litigation. Whether it is a dispute over how AI systems perform, a regulatory crackdown or a mass claim sparked by a single error replicated at scale, the legal landscape around AI is becoming increasingly complex – and increasingly active.

This article explores the emerging contours of AI litigation in the UK, focusing on the rise in legal claims across sectors, the implications of heightened regulatory scrutiny and the expanding potential for group actions. It also considers how businesses can protect themselves by adopting forward-looking risk management strategies.

As the AI legal environment shifts from theory to reality, companies must move quickly to ensure they are not only compliant but resilient in the face of new and evolving legal challenges.

The recent surge in AI-related litigation

A surge in AI-related lawsuits is already evident across various sectors. While initial claims centred on intellectual property rights and the development of AI technologies, recent cases increasingly address the use of AI in business operations. Legal actions now involve allegations of breach of contract, tortious conduct and even breaches of consumer protection law.

For example, the recent UK case of Leeway Services Ltd v Amazon concerned claims arising from the wrongful suspension of a company’s online platform due to the malfunction of AI systems, while Tyndaris SAM v MMWWVWM Limited involved a dispute over the alleged misrepresentation of an AI system’s capabilities.

Although many of these cases have not yet reached trial, they signal a growing trend of legal challenges tied to the deployment of AI, and this trend may well intensify as the use of AI continues to permeate everyday business operations.

Litigation arising from increased regulatory scrutiny

In the UK, regulatory bodies are increasingly turning their focus to AI, recognising its potential for both harm and innovation. The Information Commissioner’s Office has warned that it will take action where privacy risks have not been addressed before generative AI is introduced.

Meanwhile, the Competition and Markets Authority is scrutinising AI’s impact on competition. It has identified AI’s potential to distort competition, for example by giving undue prominence to choices that benefit the platform at the expense of options that may be objectively better for customers and consumers, or by selectively targeting customers in a way that effectively excludes new market entrants.

Increasing regulatory scrutiny has already resulted in AI litigation, with Justin Le Patourel v BT Group PLC (1381/7/7/21) an instructive example. The claim – that BT had abused its dominant market position by inflating prices for certain services – was brought following Ofcom’s review of the market.

Risk of group claims

An obvious litigation risk regarding AI is the potential for mass claims to be brought against AI developers and the businesses that rely on the technology. Because AI systems can process and act on data at such a rapid pace, a mistake can affect a large number of people before anyone notices.

Even if the loss suffered by each individual is small, the total damage when those claims are combined can be substantial, making this a serious area of concern. There have been several high-profile recent cases in which claimants have tried to use the representative action procedure and the collective proceedings regime in the Competition Appeal Tribunal (CAT) to bring AI-related group claims.

Representative actions

A few years ago, it looked as though representative actions might take off in the UK. However, the Supreme Court’s 2021 decision in Lloyd v Google put the brakes on them, at least in practice. The court held that ‘opt-out’ representative class actions (i.e., where a party is included in the claimant group unless it opts out) cannot proceed unless the claimant proves material damage and shows that each class member is seeking the same compensation.

This hurdle to bringing a claim was illustrated in late 2024 when a representative action was brought in the UK on behalf of over 1 million individuals regarding the transfer of their private information to tech companies for the purposes of developing an app. The claimants tried to circumvent the issues raised by the need to prove the “same interest” by confining the damages claimed to those of a nominal claimant whose damages represented the “lowest common denominator” of damage across all the claimants.

However, this claim failed – the claimants could not establish that everyone in the class had the “same interest”, as an individualised assessment of the tort needed to be made in each case.

Nevertheless, the door is not closed on representative actions. The Supreme Court has indicated that common issues across claimants can be determined at a first stage, on a representative basis, with claimants then individually pursuing any losses they suffered, relying on the findings of that first representative stage. Whether there are common issues will be a fact-specific question.

Collective proceedings

In recent years, there has been a notable increase in claims brought under the UK’s collective proceedings regime, enabling consumers to seek damages for anti-competitive conduct through the CAT.

This mechanism has been used extensively against technology companies, with claimants framing consumer-protection issues as competition law violations. However, in the landmark decision of Riefa v Apple Inc and Amazon.com Inc (1602/7/7/23) in January 2025, the CAT refused to certify a collective proceedings order on the basis that the proposed class representative was unsuitable.

A key factor in the scale of such litigation is the availability of funding for such claims. Collective actions are commonly financed by litigation funders. However, the Supreme Court’s 2023 decision in PACCAR v Competition Appeal Tribunal determined that litigation funding agreements (LFAs) – contracts between litigation funders and claimants which allow the funder to recoup a portion of the proceeds recovered by the claimants if the litigation is successful – were damages-based agreements (DBAs), which must adhere to certain statutory requirements.

LFAs that do not comply with those requirements are unenforceable. Numerous funding arrangements underpinning active claims before the CAT are structured in this manner, which presents significant issues, as the Competition Act 1998 prohibits the use of DBAs in opt-out competition class actions.

There are currently a number of claims in the CAT in which the defendants are challenging the validity of LFAs agreed between funders and class representatives. The previous UK government intended to legislate to reverse the PACCAR decision; however, the draft bill did not complete the legislative process in the time available before the general election. There has been no indication to date from the new government that it will take a similar course.

Mitigating litigation risk

In light of the increased litigation risk regarding the use of AI, companies may want to consider the measures outlined below to reduce their exposure.

AI awareness and oversight. Establishing clear frameworks for AI oversight and educating staff on responsible AI use may help prevent issues before they arise and reduce the chances of legal disputes.

Responsible data practices. Implementing robust data governance processes allows businesses to better manage the risks associated with using data in the creation and operation of AI systems.

Managing AI across the supply chain. Legal risks tied to third-party AI tools or services can be minimised through careful contracting, including the use of warranties, indemnities and clear terms of use. Similarly, organisations providing AI solutions can protect themselves by defining how their products may be used and setting boundaries through contractual agreements.

Conclusion

As AI technologies become further embedded in business operations, the legal landscape surrounding them is evolving at pace. The surge in AI-related litigation, coupled with increasing regulatory scrutiny and the growing potential for group claims, signals that businesses can no longer treat these risks as hypothetical.

From challenges over AI system performance and contractual breaches to concerns over competition and data protection, companies are facing an expanding range of legal threats. This dynamic environment demands a proactive approach – not only to compliance but to anticipating how AI might give rise to disputes.

To navigate this terrain effectively, organisations must take a strategic and holistic view of AI risk management. Strengthening internal governance, ensuring responsible data practices and clearly defining roles and responsibilities across digital supply chains are all critical steps in reducing exposure.

At the same time, businesses should stay attuned to evolving legal standards and funding structures for group litigation, which could significantly shape the future of AI accountability. Those that act early to integrate legal foresight into their AI strategy will be far better positioned to innovate confidently while staying within the bounds of emerging law.

 

Chiraag Shah is a partner and Julius Handler is an associate at Morrison Foerster. Mr Shah can be contacted on +44 (0)20 7920 4176 or by email: cshah@mofo.com. Mr Handler can be contacted on +44 (0)20 7920 4021 or by email: jhandler@mofo.com.

© Financier Worldwide
