AI In Dispute Resolution – Can AI Replace Human Judges, Lawyers And Experts?
In the 2022 Australian Open tennis tournament, Rafael Nadal and Daniil Medvedev battled it out in the men’s final. At one point in the match, the score read 6-2, 7-6, 3-2 and 40-0 in favor of Medvedev, and an artificial intelligence (AI) model predicted he had a 96% chance of victory.
At that stage of the match, such a prediction did not seem unreasonable. Nadal had recently recovered from COVID and surgery, and Medvedev was overwhelmingly in the lead. But in a surprise finish, Nadal then won the third, fourth and fifth sets, eventually winning his second Australian Open and 21st Grand Slam title.
While some commentators lauded the victory of humans over AI, it was not so much a failure of AI as an illustration of its inherent limitations. AI simulates human intelligence and problem-solving capabilities – and most humans likely would, on the basis of the available evidence at that moment in time, have predicted a Medvedev victory. AI works by applying logic and probability to quantitative data. It is, however, less adept at dealing with variables that are subjective or qualitative and therefore difficult to measure.
In this article, we discuss the various AI tools that have already been or are expected to be applied in disputes[1], as well as their benefits and limitations. Further, we explore the use of AI in business valuation.
The use of AI in disputes
In the context of dispute resolution, the type of AI typically under consideration is machine learning, whereby an AI system is designed to identify patterns in the training data passed through it and generate outputs by using those patterns. The system calibrates its pattern mapping as more data is passed through. Although the training data is selected by humans, the subsequent pattern mapping is automated.
Existing and possible uses of AI in the context of disputes include:
i. Disclosure – As the volume of electronic disclosure has increased, AI-based tools have been used to manage the associated time and cost. Predictive coding tools, which were first endorsed for use in English High Court litigation in 2016, rely on an initial review by humans of a sample set of documents. The tool then analyzes the features of the relevant documents (such as keywords or document type) and the coding decisions of the reviewers and identifies similar documents ranked by potential relevance.
ii. Legal research – Legal database tools already apply language-processing algorithms to maximize the relevance of searches for legislation and case authorities based on words and phrases.
iii. Drafting legal documents (“robot lawyers”) – AI tools such as ChatGPT can already be used to generate answers to specific questions. One can see the possibility that, in future, litigation-specific AI tools may be available, which are able to draft more substantive legal documents, such as pleadings. Similarly, as discussed further below, there are also potential applications for AI tools in connection with expert evidence, particularly where the evidence is heavily reliant on data.
iv. Predictive analytics – AI tools are capable of gathering and analyzing large volumes of historical information and consequently have the potential to automate, or at least facilitate, the analysis of precedent in order to predict future outcomes in litigation. However, a key limitation of AI tools is that they rely on quantitative analysis (i.e. patterns in data) to generate output and are therefore less sensitive to more qualitative factors, which may be critical for a judge determining the outcome of a case.
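The predictive-coding workflow described in item (i) – humans code a seed sample of documents, the tool learns from those decisions and ranks the remaining documents by predicted relevance – can be illustrated with a toy sketch. The Python below is a deliberately naive word-frequency model for illustration only, not any vendor’s actual implementation; real predictive-coding tools use far richer features and iterative review rounds.

```python
from collections import Counter

def train_weights(coded_sample):
    """Learn per-word relevance weights from a human-coded sample.
    coded_sample: list of (document_text, is_relevant) pairs."""
    rel, irr = Counter(), Counter()
    for text, is_relevant in coded_sample:
        (rel if is_relevant else irr).update(text.lower().split())
    # Weight = smoothed proportion of a word's occurrences in relevant docs
    vocab = set(rel) | set(irr)
    return {w: (rel[w] + 1) / (rel[w] + irr[w] + 2) for w in vocab}

def rank_documents(weights, documents):
    """Score each unreviewed document and rank by predicted relevance."""
    def score(text):
        words = text.lower().split()
        if not words:
            return 0.0
        # Unknown words get a neutral 0.5 weight
        return sum(weights.get(w, 0.5) for w in words) / len(words)
    return sorted(documents, key=score, reverse=True)

# Human reviewers code a seed set; the tool then ranks the rest.
sample = [
    ("breach of contract payment overdue", True),
    ("contract termination notice dispute", True),
    ("office party catering menu", False),
    ("holiday schedule reminder", False),
]
weights = train_weights(sample)
ranked = rank_documents(weights, [
    "lunch order for friday",
    "notice of dispute over contract payment",
])
print(ranked[0])  # the contract-related document ranks first
```

The same ranking step underlies the predictive-analytics use case in item (iv): patterns in previously coded (or previously decided) material drive a quantitative score, which is precisely why qualitative factors outside the training data are invisible to the tool.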
While an increase in the application of AI tools in certain aspects of litigation is inevitable, it is important to be aware of their limitations in relation to more judgmental aspects such as legal research and analysis, drafting of legal arguments, prediction of judicial outcomes and even deciding cases. AI-generated content is only as good as the (human-generated) data the tool is trained on, so there is a risk that output is based on incomplete or biased data. Similarly, there is a lack of transparency as to how the tool generates its answers. Finally, AI arrives at its answers through data analytics, whereas litigation outcomes are determined through human reasoning that weighs a variety of factors, both quantitative and qualitative.
In December 2023, guidance was issued to judges on the use of AI in litigation in England and Wales (“AI Judicial Guidance”). This guidance states that the use of generative AI (GenAI) to assist judges and their assistants in research or preparatory work to produce judgments is permitted as “a potentially useful secondary tool,” provided the guidelines are appropriately followed. While this guidance is officially addressed to judges, the risks and limitations of AI tools in disputes are also relevant to legal representatives, litigants in person and other professionals appointed as experts in legal proceedings.
The key points set out in the AI Judicial Guidance are as follows:
1. In principle, judges and parties/legal representatives may use GenAI provided it is used responsibly and with appropriate safeguards. However, the use of generative AI for legal research and legal analysis is not recommended, due to the inability to verify the results independently or produce substantive reasoning.
2. Before using any AI tools, judicial office holders should ensure they have a basic understanding of their capabilities and potential limitations.
3. Users of public AI chatbots or similar tools should be mindful of confidentiality, as questions and information submitted may be retained as public data and used to train the AI.
4. AI-generated content must be checked before it is used or relied upon. Answers provided by AI tools may be inaccurate, incomplete, misleading, out of date or applicable to the wrong jurisdiction.
Users should be aware of the risk of errors and biases in the datasets used in training AI tools that deploy machine learning or large language models (LLMs). In addition, users should be aware of the possibility and potential challenges of “deepfake” technology as AI is now capable of producing highly sophisticated fake material, including text, images and videos.
Valuation and AI
We noted above the potential further applications for AI tools in connection with expert evidence, particularly where data-reliant evidence is involved. We will now consider how AI might be applied in the context of valuations, where the widely used market approach estimates the value of a company (or shares therein) with reference to the prices paid for other comparable companies or shares.
Machine-learning tools leveraging sophisticated algorithms have long been used in the finance sector to solve various problems, particularly in analyzing data patterns and making predictions. While some researchers believe that the valuation process cannot be formalized, recent studies[2] suggest that machine learning, which continues to improve and advance, might offer a more accurate, less biased and less costly approach to valuing a company and identifying peer firms.
Existing tools applicable to valuation are essentially those that employ statistical-learning algorithms with predictive functionalities. Similar to the application of LLM-based tools used in the legal field, statistical learning tools apply machine learning technology and are therefore subject to similar limitations to those discussed above, including the following:
• The analysis is quantitative and therefore neglects, or attributes limited importance to, qualitative elements.
• The tools are at risk of bias or a lack of robustness when limited data is available for inputs.
• The quality and accuracy of the outputs depends on the quality of the tools and how one engages with those tools (e.g. the metrics or questions being fed in).
Although the value of a company (or shares in it) is typically expressed in numerical, quantitative terms, the factors affecting its value are also qualitative.
Quantitative factors refer to financial metrics that reflect the revenue-generating capacity, profitability, asset and liability position, and cashflows of the company. Qualitative factors are those that cannot be expressed in numerical terms but are elements that are relevant to the value placed on a company by the market in terms of future potential, such as the quality of management, the diversity and loyalty of customer groups, competitive advantage and brand recognition, and proprietary technologies.
Where the valuation is of a shareholding in the company (rather than the business as a whole), there are further qualitative factors to consider in determining the appropriate value, such as the influence and/or control which is afforded to the owner of that holding and the distribution of the remaining shares.
Ultimately, business valuation is a principle-based, rather than rule-based, exercise to estimate the value of a company (i.e. what a willing buyer and a willing seller would agree upon in an arm’s length transaction, each acting in full knowledge and without compulsion). A rule-based tool may be useful for certain aspects of a particular valuation approach, e.g. selecting peer groups under the market approach, but it cannot – at least not yet – substitute for the critical analysis and coherent narrative which the valuer brings to all of the elements relevant to the valuation.
Concluding remarks
Both humans and AI tools have their own strengths and limitations. For the time being, content generated by any AI tool based on machine learning should be used with caution. A critical mind should be applied before any such content is relied upon as it may be incomplete, subject to inaccuracies, susceptible to bias and potentially based on fake material.
In the context of disputes, extra caution should also be exercised given the confidentiality and sensitivity of the information required to be input into an AI tool. While AI has already been widely used for certain aspects of litigation involving the reviewing and processing of large quantities of data/content, such as disclosure, it is currently recommended that AI not be used for aspects requiring greater judgment and consideration of a broader range of qualitative and quantitative elements, such as drafting of legal documents, outcome prediction and decision making.
Endnotes
[Editor’s Note: Nikki Coles is a Managing Director and Lucia Yau is a Director with Alvarez & Marsal’s Disputes and Investigations in London. Any commentary or opinions do not reflect the opinions of Alvarez & Marsal or LexisNexis®, Mealey Publications™. Copyright © 2024 by Nikki Coles and Lucia Yau. Responses are welcome.]
*This publication was first featured in Mealey’s® Litigation Report.