Can computer programs resolve legal disputes? For decades, the answer from much of the legal community has been no. However, developments in artificial intelligence (AI), and in particular natural language processing and machine learning, have led to renewed discussion of this possibility. Increasingly, tools are being developed to help parties predict litigation outcomes and to help judges determine them. Yet while some argue that the use of AI in legal disputes can shorten proceedings, cut costs, and improve access to justice, others warn that "black box" AI systems could reduce transparency, entrench bias, and harm the development of the law.

Litigation Outcome Prediction

The use of computers to predict the outcome of legal cases is not new. As early as the 1980s, researchers were building outcome prediction tools, often in the form of decision-tree algorithms. Developments in AI have since enabled far more sophisticated prediction models. In 2017, a model built by Katz et al. predicted US Supreme Court decisions with an accuracy of 70.2%, while in 2019, a model built by Medvedeva et al. predicted decisions of the European Court of Human Rights with an accuracy of 75%. In several studies, AI tools have predicted case outcomes more accurately than expert lawyers. Companies such as Solomonic and LexisNexis-owned Lex Machina now provide commercial litigation prediction and analytics tools.
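To make the mechanics concrete, the decision-tree approach can be sketched in a few lines of Python. The following is a minimal, hypothetical example: the features (court, claim type, judge), the data, and the outcomes are all invented for illustration, and real tools are trained on far larger and richer datasets.

```python
# A minimal, hypothetical sketch of a decision-tree outcome predictor.
# The case features, data, and outcomes below are invented for
# illustration and do not reflect any real tool or dataset.
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier

# Hypothetical case metadata: [court, claim_type, judge]
cases = [
    ["high_court", "contract", "judge_a"],
    ["county_court", "tort", "judge_b"],
    ["high_court", "contract", "judge_b"],
    ["county_court", "contract", "judge_a"],
    ["high_court", "tort", "judge_a"],
    ["county_court", "tort", "judge_b"],
    ["high_court", "tort", "judge_b"],
    ["county_court", "contract", "judge_b"],
]
outcomes = ["claimant", "defendant", "claimant", "defendant",
            "defendant", "claimant", "claimant", "defendant"]

X_train, X_test, y_train, y_test = train_test_split(
    cases, outcomes, test_size=0.25, random_state=0)

# One-hot encode the categorical features, then fit a shallow tree.
model = make_pipeline(
    OneHotEncoder(handle_unknown="ignore"),
    DecisionTreeClassifier(max_depth=3, random_state=0),
)
model.fit(X_train, y_train)

# The headline "accuracy" figures quoted for research models are,
# in essence, this number: the share of held-out cases predicted
# correctly.
print(f"accuracy: {model.score(X_test, y_test):.0%}")
```

A shallow tree is used here because, as with the early systems, its branching structure can at least be read by a human; the more accurate modern models tend to be far less legible.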

Outcome prediction tools can be used by parties and their legal representatives to craft arguments and facilitate settlement negotiations, or by third-party litigation funders to assess the risk of providing funding. More broadly, outcome prediction may be used by insurers, for example, to help calculate claim payouts. However, those using such tools must take care not to breach any professional or legal obligations. France, for example, has passed legislation prohibiting the use of AI tools to analyze judicial behavior. Lawyers may also need to consider whether their liability insurance covers negligence or malpractice claims arising from their use of AI.

Robot Judges

A more controversial matter is the use of AI by courts to make decisions. Last month, Sundaresh Menon, Chief Justice of the Supreme Court of Singapore, gave a speech stating that Singapore is unlikely to use AI in adjudication. In the UK, by contrast, the Master of the Rolls, Sir Geoffrey Vos, has suggested that in time certain judicial decisions may come to be made by AI.

For now, few, if any, courts use fully automated decision-making. However, the use of AI to assist judges is not uncommon. In the US, many courts use the COMPAS system to help determine criminal sentences, a practice upheld by the Wisconsin Supreme Court. Malaysian courts have experimented with similar AI systems. More recently, judges in Colombia and Pakistan have used ChatGPT to assist with preparing judgments. By far the most advanced example is China, where AI is used in a number of "smart courts" to automate transcription, analyze evidence, recommend decisions, and monitor the consistency of judgments with past case law.

Benefits and Risks

The use of AI to help resolve legal disputes offers several potential benefits for the parties. Many legal disputes are small in value and involve relatively simple issues. If AI can resolve such matters quickly, cheaply, and consistently, it could cut costs, broadening access to justice, and save time, reducing court backlogs. Even in higher-value disputes, predictive AI may help the parties reach an earlier settlement, avoiding the cost and risk of a full trial.

There are, however, reasons for caution, which explain why courts in most jurisdictions have been slow to take up AI. First, the development of reliable AI tools depends on the digitization of large volumes of legal precedent, and many jurisdictions, including some in Southeast Asia, lack well-managed repositories of court judgments. Second, where tools can be built, training AI models on existing datasets risks perpetuating historical bias and could hinder the development of the law, which proceeds through the application of precedent to new facts. There is also the risk that private parties using AI tools to settle disputes could manipulate the AI to favor their claim, although legislation, such as the Computer Crimes Act in Thailand, may protect against this.

More broadly, AI tools do not engage in legal reasoning. Lex Machina, for example, makes accurate outcome predictions using metadata about cases, such as the judge, the lawyers, and the issues in dispute, rather than by analyzing the text of past case law. This lack of transparency and intelligible logic may make it hard for parties and judges to challenge automated decisions. Removing judges from the decision-making process could also mean a loss of human discretion, which is key to ensuring that judgments are both fair and seen by the parties to be so.
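Continuing the hypothetical sketch above, the point can be made concrete: the most such a model can usually "explain" is which inputs correlated with past outcomes. Inspecting the fitted tree's feature importances (using the invented feature names from the earlier example) might reveal, say, that the identity of the judge drove the predictions, but it says nothing about the legal merits.

```python
# Continuing the hypothetical sketch above: list which metadata
# features the fitted tree relied on. This ranks correlations
# (e.g. a particular judge) - it offers no legal reasoning.
encoder = model.named_steps["onehotencoder"]
tree = model.named_steps["decisiontreeclassifier"]

names = encoder.get_feature_names_out(["court", "claim_type", "judge"])
for name, weight in zip(names, tree.feature_importances_):
    if weight > 0:
        print(f"{name}: {weight:.2f}")
```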

Lawyers and parties to legal disputes should stay abreast of new AI tools that they can use to their advantage. However, for now at least, it seems that human judges and physical courtrooms will remain the norm.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.