Legal decisions and AI—are judges really that predictable?

 

A study recently found that artificial intelligence (AI) software can predict almost 80% of outcomes in human rights cases. Joanne Frears, partner at Blandy & Blandy, considers the implications of this study and the rising use of AI within law more generally, with contributions from Fern Tawera, an LLM (human rights) student currently writing a thesis on AI and human rights.

 

An AI project recently predicted the outcomes of hundreds of European Court of Human Rights cases with an accuracy of 79%. How was the project able to achieve this accuracy?

In very general terms, the researchers for this study text-mined specific sections of published judgments of the European Court of Human Rights. By identifying frequent or recurring words in the judgments, the researchers were able to create N-grams (‘bags of words’) that correlated with particular case outcomes. These N-grams were then used to train a machine-learning model to recognise those words again and, based on the recurrence of those ‘bags of words’, to predict the decisions in the cases.
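
In code, that pipeline amounts to little more than counting word N-grams and fitting a linear classifier on the counts. The sketch below is a minimal illustration, not the researchers’ actual code: the toy corpus, the scikit-learn components and the feature settings are all assumptions made for the example.

```python
# A minimal sketch of N-gram ('bag of words') outcome prediction.
# Illustrative only: the corpus below is a toy stand-in, not the
# study's data, and the model choices are assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Judgment text paired with 1 (violation found) or 0 (no violation).
judgments = [
    "the applicant was subjected to treatment contrary to article 3",
    "the court finds no indication of ill-treatment in detention",
    "the length of the proceedings was excessive and unreasonable",
    "the domestic courts acted within a reasonable time",
] * 5  # repeated so the toy set is large enough to fit a model
violation = [1, 0, 1, 0] * 5

# Count word N-grams (here unigrams to trigrams): each judgment
# becomes a 'bag of words' vector, and the linear classifier learns
# which N-grams recur alongside each outcome.
model = make_pipeline(CountVectorizer(ngram_range=(1, 3)), LinearSVC())
model.fit(judgments, violation)

print(model.predict(["treatment contrary to article 3"]))  # likely [1]
```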

Obviously, machine learning means a computer can be taught to produce a determinative outcome, such as 1+1=2. Provided the correct information is fed into it (1+1 in this scenario), the answer learnt should always be 2; this is not ‘accurate’, it is inevitable. The particularly interesting question here is how a huge amount of unstructured text can be trawled and its outcomes predicted with such accuracy. It seems that judges are creatures of habit and use the same ‘bags of words’ when assessing and determining cases. That is only half the story, though: those words are themselves determined by the legislation and by its application to specific facts, so the recurrence of pre-determined word sets should be capable of assessment, and the outcome when those words appear together should be predictable.

Is there anything particular about human rights cases that allowed the AI software to make predictions with such accuracy? Are these cases simpler or more complex, for example, than other cases?

This project focused on whether there had been a violation of any of articles 3, 6 or 8 of the European Convention on Human Rights (ECHR). The reasoning behind the use of these articles was that they provided the largest data set. The larger the data set, the more training can be undertaken and, with each pass of training, the more accurately the computer can predict the court’s outcome.
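
The claimed link between data-set size and accuracy can be illustrated with a learning curve. This is an assumed, synthetic demonstration, not the study’s data: the vocabulary, corpus and model below are invented for the example.

```python
# Hedged illustration: held-out accuracy typically rises as more
# training cases are supplied. Synthetic corpus, not the study's data.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import learning_curve
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
vocab = {1: ["violation", "breach", "excessive", "unjustified"],
         0: ["reasonable", "lawful", "justified", "proportionate"]}

# Generate 200 fake 'judgments': mostly words from the true class,
# with a little noise from the other class.
texts, labels = [], []
for _ in range(200):
    y = int(rng.integers(2))
    words = list(rng.choice(vocab[y], size=8)) + list(rng.choice(vocab[1 - y], size=2))
    texts.append(" ".join(words))
    labels.append(y)

model = make_pipeline(CountVectorizer(), LinearSVC())
sizes, _, test_scores = learning_curve(model, texts, labels,
                                       train_sizes=[0.2, 0.5, 1.0], cv=5)
for n, score in zip(sizes, test_scores.mean(axis=1)):
    print(f"{n:4d} training cases -> held-out accuracy {score:.2f}")
```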

The exercise overall does not suggest that these cases are simpler than others, but the precise language of the legislation and the parameters of the offences might make accurate prediction of an outcome more likely. The authors of the study point out that their ‘empirical analysis’ indicates that the facts of these cases appear to be the most important factor in predicting the outcome of a case. This could of course be a circular argument: as any barrister will tell you, you only find out what you want to when you limit the questions you ask.

It is possible that, if an AI computer were set to work judging cases, the outcomes would be less predictable, as it would interpret the law based on past cases and would not look to:

  • make new law
  • develop existing principles, or
  • permit any emotional response to the facts to make them fit an outcome it believes is ‘right’

As insightful as this sort of study is in advancing our attitudes towards how technology can be applied to the law, it does not help us decide whether AI could be used to determine matters such as ‘passing off’, where a level of value judgement is required to assess whether products ‘look the same’ as well as ‘share the same characteristics’, or whether an AI jury that has learnt how to act as a jury could assess a criminal matter in the same way as a human jury of our peers. We should not, however, imagine that AI cannot learn this. In game two of the Google DeepMind Challenge Match, the world watched in amazement as AlphaGo played a move it had not been taught: utterly unpredictable, beautifully strategic, and one that set up a win 20 moves later. AI experts thought this type of learning was at least ten years away, but it shows that AI is already capable of outstripping its tutors.

The authors of this experiment suggest that law firms are increasingly turning to AI to wade through vast amounts of legal data. What does this tell us about the role of AI and automation more generally in the future of the legal system? Can you imagine, for example, AI representation, AI juries or even AI judges?

The authors acknowledge that AI can be useful across the legal profession and on both sides of the bench, as a tool to rapidly identify cases and extract the patterns that lead to decisions going one way or another. Obviously AI can be an incredibly valuable tool for wading through massive data sets and, having relied for years on teams of paralegals, assistants, associates and partners to do this work, the legal profession is perhaps behind the curve in adopting tools to do it more easily and more accurately. Some law firms do already use proprietary apps and AI to:

  • assess flaws in legal contracts
  • predict risks for clients, and
  • give insurers an accurate indication of the likelihood of success

As a profession generally, we are required to wade through, assimilate and process vast amounts of data and to give our clients the benefit of our learning in the form of understandable advice. The advice we give is sometimes counter-intuitive and often has to be ‘creative’ to find a solution for a client whose expectations differ from the commercial realities or the strict letter of the law. AI can be taught empathy, but not cunning, and this is where, for now, the human lawyer’s brain can surpass AI.

What are the advantages of the increasing role of automation in the legal system?

In general, automation can reduce costs and increase certainty. For clients who require near-certain outcomes or business-critical advice, AI can be a vital tool in giving this degree of clarity and insight. The extent of the information it can process, learn from and draw upon to reach an informed outcome is huge, and it is more transparent and predictable than the ‘feeling’ that many lawyers rely on. As immediate access to vast libraries of information and the ability to become ‘expert’ via ‘Professor Google’ undermine all traditional professions as gatekeepers of knowledge, clients are going to want to see advisers using the vast processing power of AI and the toolkit it offers to give certainty, even if just to test the advice given by their trusted lawyer.

Are there any downsides to a more extensive role for AIs and automation within the legal system?

We did a straw poll of this question in the office and the outcome (probably just as predictably) varied from lawyer to lawyer. The ‘yes’ camp said the downsides lie in using a computer without emotional intelligence, and argued that AI can never compete with the human touch that clients seek in times of difficulty or conflict. The ‘no’ camp said it can help resolve entrenched positions where time, money and court resources are wasted on futile or vexatious cases. The answer probably lies in the middle. AI should be used:

  • to speed up matters
  • to save money
  • for initial queries
  • for bulk litigation or class actions, and
  • to undertake a whole host of operations that require an eye for detail

AI can also be used to determine legal strategies and to predict the outcome of each. It would perhaps be less effective in private client matters, at least until its emotional intelligence matches its IQ.

Are there any other developments that practitioners interested in the future role of automation should be aware of?

Our profession is changing. The first digital generation still embraces face-to-face communication but sees no need for paper. Millennial clients and entrants to the profession embrace technology in a way that leaves most partners looking Jurassic. ‘Generation tag’ takes a completely different approach to ‘privacy’ and ‘confidentiality’, and so it goes on. Law is, after all, a communications business, and if we want to continue to communicate with our clients and engage with them in the manner they expect, we cannot close our minds to the massive developments that big data, automation, AI and robotics offer us, or to the risks they present for our clients.

Interviewed by Alex Heshmaty. The views expressed by our Legal Analysis interviewees are not necessarily those of the proprietor.

Interested in automation and law? 

1. Automated law—the mindset of the legal profession

2. Automated law—law firms developing technology ‘in-house’

3. Automated law—regulating technology in the legal industry  
