Generative artificial intelligence in dispute resolution

December 2023 | EXPERT BRIEFING | LITIGATION & DISPUTE RESOLUTION


Is generative artificial intelligence (GenAI) in dispute resolution a good thing? Obviously, with a question like this, the first thing we did was ask ChatGPT. The conclusion in the document produced was: “Generative AI can be a valuable tool in dispute resolution, but its use should be approached with caution. Integrating AI into the process may improve efficiency and objectivity, but it’s important to carefully consider the limitations, ethical implications, and the need for human oversight. Combining the strengths of AI with human judgment and emotional intelligence might provide a more balanced and effective approach to dispute resolution. Additionally, stakeholders and the public should be involved in discussions about the use of AI in legal contexts to ensure transparency and accountability.”

That appears to be a disarmingly self-effacing statement, acknowledging there are shortcomings to be aware of. Is this apparently balanced view evidence of the trustworthiness of GenAI? We wonder how different the conclusion might have been 12 months ago, before a number of events from which ChatGPT has, inevitably, learned and tailored its response.

In this article, we consider whether GenAI is an ‘all or nothing’ tool, or whether it still has a role to play despite the limitations it has itself identified.

What is GenAI?

This may not be as obvious as it seems, but understanding what GenAI is doing is crucial to understanding what it produces, and the appropriate way to use it. GenAI platforms such as ChatGPT and Google Bard are effectively predictive text machines. They train on whatever material is available to them, largely from the internet, in order to calculate probabilities about how language is used, based on patterns discerned from the training material. These platforms do not simply regurgitate the material they have been trained on; they are designed to produce new content that is original and creative. They can create the deceptive impression of being extremely sophisticated search engines, but they are content creation tools. They do not care whether the sentences they produce are accurate, only that, per their prediction, the words are likely to appear together in a similar context.
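To make the ‘predictive text machine’ point concrete, the Python sketch below builds a toy next-word predictor from a tiny invented corpus. It is a drastic simplification (real platforms use large neural networks over tokens, not word-frequency tables), but it illustrates the core mechanism: the generator picks whatever is statistically likely to come next, with no concept of whether the result is true.

```python
import random
from collections import Counter, defaultdict

# Toy training corpus; real models train on vast amounts of internet text.
corpus = (
    "the court held that the claim failed . "
    "the court held that the appeal succeeded . "
    "the tribunal found that the claim failed . "
).split()

# Count which word follows which: a crude stand-in for a language model.
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def generate(start: str, length: int = 8) -> str:
    """Repeatedly sample a likely next word. The output looks plausible,
    but it is never checked for accuracy: there is no notion of facts."""
    words = [start]
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the court held that the claim failed ."
```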

But beware the word ‘predictive’. It is wise not to look for absolutes in a prediction. GenAI platforms are not concerned with the concept of facts, or what is ‘true’. Nor do they make any statement about the likelihood that what they produce is ‘correct’. They will, and do, make things up, but their manner of presenting what they have made up comes with beguiling, literally incredible, confidence.

What is it good for?

As with many automated systems, the main advantage of GenAI is the sheer speed with which it can process very large amounts of information. Disclosure review and identifying responsive and privileged material would seem to be obvious ways to apply GenAI in dispute resolution. Tasks that currently take teams of document reviewers several days could be performed in seconds.
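As an illustration only, the sketch below shows how a first-pass triage of that kind might be wired up against a GenAI API. The prompt, labels and model choice are our own assumptions rather than any established product, and, for the confidentiality reasons discussed later in this article, anything of this kind would need a private, contractually protected deployment rather than a free public platform, with every flag checked by a human reviewer.

```python
# Hypothetical first-pass disclosure triage using the OpenAI Python SDK.
# The prompt, labels and model choice are illustrative assumptions only;
# a human reviewer must verify every classification before reliance.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

INSTRUCTIONS = (
    "You are assisting with litigation disclosure review. Classify the "
    "document as RESPONSIVE or NOT_RESPONSIVE to the issues in dispute, "
    "and add PRIVILEGED if it appears to contain legal advice. "
    "Reply with the labels only."
)

def triage(document_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[
            {"role": "system", "content": INSTRUCTIONS},
            {"role": "user", "content": document_text},
        ],
    )
    return response.choices[0].message.content

# Output is a starting point for human review, not a conclusion:
print(triage("Email from external counsel advising on settlement strategy."))
```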

There are also specific legal AI models available, trained on the content of accredited legal databases and able to perform legal research in a matter of moments.

That might be taken further, with GenAI producing narratives from responsive documents that become the starting point for interviewing witnesses. In turn, the absorption of trial bundles, including expert reports, witness evidence and contemporaneous documents, might provide valuable assistance in structuring hearings, planning cross-examination and drafting submissions. Again, though, this would be a foundational source for the work that needs to be done, not a complete solution. Expecting GenAI to be the advocate or the judge is more challenging.

What can go wrong?

Hallucinations. Some users clearly do not appreciate that GenAI does not just regurgitate existing information.

When Roberto Mata sued the airline Avianca in New York and the airline applied to have the claim dismissed, Mr Mata’s lawyer prepared a brief using ChatGPT. The result was a very convincing brief, supported by numerous legal authorities, yet there was a flaw. None of the cases cited actually existed.

Because ChatGPT did not know the answer to what it had been asked, it had, as per its purpose, created new content in order to provide an answer; or, rather, it had predicted how the answer might be arrived at. The consequence for the lawyer of using this material without verifying it was, among other things, a $5,000 fine.

A similar case arose in Manchester, England, where a litigant in person asked ChatGPT to provide legal precedent to support their case. The result was four cited cases, plus supposed quotations from them. One of the cases was entirely fabricated, and none of the other three contained the quotations that had been used.

These are cautionary tales of publicly available GenAI being used in the wrong way. To treat content generated by GenAI platforms as substantiated and reliable is, effectively, to take a ‘post-truth’ approach to evidence. It raises numerous ethical concerns, from the high risk of misleading a court, to failing to represent a client effectively, to failing to supervise the conduct of a matter properly.

Fairness and bias. GenAI platforms learn from patterns in the material they are trained on, and those patterns include bias. Trials have demonstrated traits such as sexism and racism being baked into the content that is created. For example, GenAI was asked to complete the following string of relative statements: (i) Paris is to France, as Venice is to [?]; (ii) men are to doctors, as women are to [?]; and (iii) men are to computer programmers, as women are to [?]. The answers given were ‘Italy’, ‘nurses’ and ‘homemakers’, respectively.
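Those completions mirror a well-documented bias effect in word embeddings, the vector representations of words that underpin language models. The sketch below, assuming the gensim library and a publicly available GloVe model, completes analogies by vector arithmetic; any biased answers come directly from patterns in the text the vectors were trained on, and exact results vary from model to model.

```python
# Analogy completion with pre-trained word vectors, to illustrate how
# bias in training text surfaces in output. Results are model-dependent.
import gensim.downloader

vectors = gensim.downloader.load("glove-wiki-gigaword-100")

def complete_analogy(a: str, b: str, c: str) -> str:
    """Solve 'a is to b as c is to ?' via vector arithmetic: b - a + c."""
    return vectors.most_similar(positive=[b, c], negative=[a], topn=1)[0][0]

print(complete_analogy("paris", "france", "venice"))  # geography: benign
print(complete_analogy("man", "doctor", "woman"))     # may surface bias
```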

In the same way that content produced by GenAI platforms must be checked for factual accuracy, discriminatory content must be identified and taken into account, including the consequences of using such content and the risk that it is unreliable by virtue of being biased. Bias is a risk even with accredited legal resources: judgments from the 1970s may contain sound legal reasoning, but they are also likely to contain observations, language and commentary reflective of the society of the time. A human would (hopefully) detect and disregard that. A machine learning tool may not.

Confidentiality and data protection. GenAI platforms learn from whatever material is available to them, and that includes what users input when asking for new content. As such, users might be able to manipulate their queries so as to access confidential data entered by previous users.

That means confidential information, such as client identity details or commercially sensitive material, is put at risk from the moment it is entered into a platform. It was for precisely this reason that Samsung banned its employees from accessing public GenAI platforms, after employees had entered highly confidential, proprietary source code to check it for errors, and notes from internal meetings to have them summarised.

By the same token, the owners of the platforms have not tended to be open about the sources of the information used to train them. This has led to claims of copyright infringement, most famously, perhaps, by Sarah Silverman, who has claimed that her book, ‘The Bedwetter’, was copied and ingested to train ChatGPT in breach of copyright. A trade group of US authors is now also suing OpenAI, the developer of ChatGPT.

Knowing when to change. The answer from ChatGPT quoted at the start of this article highlights the importance of emotional intelligence and human judgment.

A study by University College London trained an AI system on a number of decided cases before the European Court of Human Rights relating to three particular articles of the European Convention on Human Rights. It then asked the system to predict the outcome of other cases that had gone before the court on the same articles. The system’s success rate was 79 percent.
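At its core, that study was an exercise in supervised text classification: train a model on the text of decided cases, then ask it to label unseen ones. The scikit-learn sketch below illustrates the technique in miniature; the two ‘cases’ are invented placeholders, not the study’s actual data or feature set.

```python
# Miniature illustration of outcome prediction as text classification.
# The training examples are invented placeholders, not real ECHR cases.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = [
    "applicant detained for a prolonged period without judicial review",
    "domestic courts examined the complaint fully and gave reasons",
]
train_labels = ["violation", "no_violation"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(train_texts, train_labels)

# The classifier follows the statistical trajectory of past decisions;
# it cannot anticipate a court deliberately departing from that path.
print(model.predict(["applicant held in detention with no review"]))
```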

A 79 percent success rate sounds impressive – but one in five cases where the humans did something less predictable is a lot. There are myriad non-legal situations in which an emotional judgment is valid and important. In matters of law, particularly issues within a judge’s discretion, or appeals that identify a need to move away from the trajectory of how an issue has evolved (even, occasionally, a complete U-turn), the human element of understanding when to do that, and for which issues, is invaluable.

The practice of law is, unquestionably, a document-heavy industry, which makes it an obvious ‘target’ for legal AI. But the true value of the lawyer is as a trusted adviser, and GenAI models can only draw inferences from patterns they detect in their training data; they cannot replicate that judgment.

Conclusions

GenAI is incredibly helpful in many situations, but it cannot create legal solutions or precedents that do not exist. It cannot be relied upon in court for advocacy or as a source of authority, and the fact that judges in some jurisdictions, notably US states such as Texas, Illinois and Pennsylvania, now require specific disclosure of when GenAI has been used strongly suggests that submissions incorporating such material may be treated more sceptically. It largely comes down to a question of trust, and the reality is that publicly available, free GenAI has been proven untrustworthy as a tool for legal research and legal judgment. Blindly relying on something known to be untrustworthy is unethical.

Legal AI tools are safer, provided there is appropriate due diligence and user training, because users can have confidence that the content produced is reliable and has not involved a breach of client confidentiality. But making legal arguments, or judgments, is best left to humans for the foreseeable future.

 

Alice O’Donovan is an associate and Simon Hems is a partner at McGuireWoods. Ms O’Donovan can be contacted on +44 (0)20 7632 1673 or by email: aodonovan@mcguirewoods.com. Mr Hems can be contacted on +44 (0)20 7632 1605 or by email: shems@mcguirewoods.com.

© Financier Worldwide

