
AI, arbitration and the black box problem

I want to live in a world where every dispute can be resolved fairly, on all relevant facts and legislation, in a matter of seconds. Some descriptions of AI promise that one day I will. These visions are equal parts utopia and dystopia, but either way, we are not there yet.

Today’s AI applications are made to order, meaning they are built to solve specific problems using specific data. This may change in the future, but for now, the problems they can solve are not as broad as “decide this case”, but rather “find this specific type of document in an enormous database” or “find me relevant case law on this topic”. AI still only assists, rather than replaces, humans in the judicial process.

Put very simply, AI solves problems by training on datasets, which means analysing vast amounts of data to find patterns and replicate behaviours. In our field, this data could be facts from previous cases and awards. The risk of replicating human bias present in such datasets is well known and much discussed. Human bias, unfortunate as it is, tends to be predictable, which makes it possible to detect and correct for.
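To make “training” concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the feature names and outcomes are invented for illustration and are not taken from any real system or real case data.

```python
# A minimal sketch of what "training on a dataset" means in practice.
# All case features and outcomes below are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Each row is one past case: [claim_amount, contract_age, prior_breaches]
X = rng.normal(size=(500, 3))
# Pretend outcomes (1 = claim upheld) follow a pattern hidden in the data.
y = (0.8 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)   # "training": fit patterns in the data
new_case = np.array([[1.2, -0.4, 0.0]])
print(model.predict_proba(new_case))     # replicate the learned behaviour
```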

There are other blind spots of AI that are harder to detect.

Generally, we don’t know exactly which data AI bases its conclusions on. If you scan a large enough dataset for trends, you can often find strange correlations. For instance, there is a 95% correlation between per capita cheese consumption in the US and the number of people strangled by their bedsheets, not to mention the 99% correlation between the divorce rate in Maine and margarine consumption in the years 2000-2009. AI would not necessarily know to disregard these correlations: it cannot by itself determine which facts are relevant, that is, tell correlation from causation. This is often referred to as the black box problem of AI.
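A few lines of code show how easily such correlations arise. The numbers below are invented, but like the real cheese and bedsheet statistics, the two series share nothing except an upward drift over the same decade, and that alone is enough to produce a near-perfect correlation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative numbers only: two unrelated quantities that both
# happen to drift upward over the years 2000-2009.
years = np.arange(2000, 2010)
cheese_per_capita = 29.0 + 0.4 * (years - 2000) + rng.normal(scale=0.2, size=10)
bedsheet_deaths = 330.0 + 45.0 * (years - 2000) + rng.normal(scale=25.0, size=10)

r = np.corrcoef(cheese_per_capita, bedsheet_deaths)[0, 1]
print(f"Pearson correlation: {r:.2f}")  # close to 1, despite no causal link
```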

In a University of Washington experiment (the study behind the LIME explanation method), AI was trained to tell huskies from wolves. It performed well in testing but made some strange mistakes. On closer examination, it turned out that in the training data, all pictures of wolves had snow in the background, whereas the pictures of huskies did not. The model had effectively concluded that wolves are four-legged, hairy creatures that walk on snow, and huskies are those that don’t.
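The same failure mode can be reproduced in miniature. In the sketch below, assuming hypothetical image features, a “snow in background” feature happens to match the wolf label in every training photo, so the model takes the shortcut and then fails on a wolf photographed on bare ground.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 200

# Hypothetical image features: the first two carry no signal, while
# "snow in background" accidentally matches the label in every training photo.
is_wolf = rng.integers(0, 2, size=n)
X = np.column_stack([
    rng.normal(size=n),   # ear shape: pure noise
    rng.normal(size=n),   # coat colour: pure noise
    is_wolf,              # snow in background: leaks the label
])
model = DecisionTreeClassifier().fit(X, is_wolf)

# Training accuracy looks perfect, because the model learned "snow = wolf"...
print(model.score(X, is_wolf))            # 1.0

# ...so a wolf photographed without snow is confidently misclassified.
wolf_without_snow = np.array([[0.1, -0.2, 0]])
print(model.predict(wolf_without_snow))   # [0] -> "husky"
```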

The ability to separate the relevant from the irrelevant is arguably a core advantage of the human mind over AI.

The first areas where we are seeing AI applied to disputes are non-complex, repetitive matters with large volumes of cases. Take parking ticket appeals, a field where interesting AI applications already exist. Without going into the specifics of any existing application, imagine training an AI on pictures of previous parking violations to teach it the difference between correct and incorrect parking. Since we don’t know which data it assesses, we don’t know whether it has looked at all the pictures and concluded that a parking violation can only occur when it’s sunny, or when the car is photographed from behind.

Naturally, this lack of transparency is not ideal in the judicial world.

These are well-known challenges in AI, and there’s a movement towards Explainable AI, where both the result and the methodology of an AI system can be understood by human experts. With better transparency into the methodology, we might get at least one step closer to the fair and unbiased super judge I described in the beginning. But there are many steps still to go.
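As a sketch of what Explainable AI can look like in practice, the snippet below uses the open-source LIME library, which grew out of the same husky/wolf study. The model, feature names and data are all invented for illustration; the point is only that the explanation shows which features drove one individual prediction.

```python
# A minimal Explainable AI sketch using the LIME library.
# Feature names, data and class labels are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["claim_amount", "days_overdue", "prior_disputes"]  # invented
X = rng.normal(size=(300, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # outcome driven by two features

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["dismissed", "upheld"]
)
# Explain one individual prediction: which features pushed it which way?
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
print(explanation.as_list())
```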

 

Lise Alm

Head of Business Development
