Technologies harnessing artificial intelligence (AI) are becoming increasingly common in our everyday lives. Built-in smartphone assistants help us stay organized, applications suggest new music based on our tastes, and an invisible watchdog detects when our spending habits seem out of the ordinary. AI makes all of these services possible. Industry after industry has been “disrupted” by AI, and the legal industry is not immune. This disruption, however, may be just beginning: applications of AI are permeating the legal sphere, with the potential to change the inner workings of courtrooms and law offices alike.

Proponents of AI emphasize the benefits automation might bring to the legal process, from increasing access to justice to mitigating the biases of human judges. Opponents of bringing AI technologies into the courtroom warn of privacy risks and of the false sense of objectivity that AI can generate.

For supporters, AI holds the key to a complete transformation of the US legal system. At present, fewer than 50% of people have meaningful access to the legal system. Despite this marked lack of access, courts still suffer from staggering case backlogs, preventing them from giving each case sufficient individualized attention. Court systems in the United States and abroad have started to adopt AI technology in an effort to relieve pressure on the courts and make justice more accessible. The Superior Court of Los Angeles County in California, the world’s largest court, uses Gina the Avatar to help residents handle their traffic citations. Gina does not learn from prior cases; her programming follows predefined paths. Even so, she represents a starting point for more sophisticated automation. Estonia plans to allow AI judges to preside over small claims disputes, and China and Australia have launched online AI courts that are already in operation. China’s online courts have handled over three million cases, and 98% of their rulings have been accepted without appeal. These courts illustrate AI’s potential to resolve minor claims independently.
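To see what “predefined paths” means in practice, the sketch below models a scripted assistant as a small state machine. It is a hypothetical illustration, not the Los Angeles court’s actual system: every user answer maps to a hand-written next step, and nothing is learned from prior interactions.

```python
# A minimal sketch of a predefined-path assistant: a scripted decision
# tree, not a system that learns from prior cases. The flow below is
# hypothetical, not the court's actual script.
FLOW = {
    "start": ("Do you want to pay your citation or contest it?",
              {"pay": "pay", "contest": "contest"}),
    "pay": ("You may pay online, by mail, or in person. Anything else?",
            {"no": "end", "yes": "start"}),
    "contest": ("You may request a court date or a trial by written "
                "declaration. Anything else?",
                {"no": "end", "yes": "start"}),
}

def run():
    state = "start"
    while state != "end":
        prompt, transitions = FLOW[state]
        answer = input(prompt + " ").strip().lower()
        # Every answer maps to a predefined next state; unrecognized
        # input simply re-asks the question. Nothing is learned.
        state = transitions.get(answer, state)
    print("Goodbye.")

if __name__ == "__main__":
    run()
```

The contrast with the machine-learning tools discussed below is that a human, not an algorithm trained on data, authors every branch of this script.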

Advocates of AI in the courtroom believe the technology offers more than backlog-clearing capabilities. These supporters argue that AI can actually make judges fairer by serving as a check on their explicit and implicit biases, and as an objective risk-assessment tool in pre-trial proceedings. Biases are unavoidable in any legal system with human decision-makers. A 2011 study showed that parole boards hand down harsher decisions in the hour before lunch and the hour before the end of the day. The United States Sentencing Commission has reported that black men continue to receive sentences that are, on average, 19.1% longer than those of similarly situated white men. To combat these systemic biases, proponents argue that artificial intelligence provides an alternate and ostensibly unbiased decision-maker.

Conversely, critics of AI have called its purported neutrality into question. For example, criminal courts have already adopted COMPAS (short for Correctional Offender Management Profiling for Alternative Sanctions), an AI tool that calculates recidivism risk scores to help judges decide whether defendants should be incarcerated or released before trial. COMPAS is controversial because its algorithm is proprietary and cannot be independently verified. This lack of verification presents a serious concern, especially after a 2016 ProPublica investigation found that COMPAS risk scores were biased against black defendants. The machine learning algorithms that power tools like COMPAS are only as unbiased as the data on which they are trained. Entrenched bias within the legal system shapes the training data available to software like COMPAS, perpetuating existing biases under the guise of objectivity.
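The mechanism behind this feedback loop is easy to demonstrate. The sketch below, which uses entirely synthetic data and is not COMPAS’s actual model, trains a standard classifier on records in which one group was policed more heavily; the resulting “risk scores” penalize that group even though the true reoffense propensity is identical by construction.

```python
# A toy illustration (not COMPAS's actual model) of how a classifier
# trained on biased historical data reproduces that bias.
# All data is synthetic; assumes numpy and scikit-learn are installed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# True underlying reoffense propensity is identical across groups.
group = rng.integers(0, 2, size=n)          # e.g., a protected attribute
propensity = rng.normal(0, 1, size=n)       # same distribution for both

# Biased historical labels: group 1 was policed more heavily, so its
# members were *recorded* as reoffending more often at equal propensity.
recorded = (propensity + 0.8 * group + rng.normal(0, 1, size=n)) > 0.5

# Train on features that include a proxy for group membership
# (in real systems, something like a zip code).
proxy = group + rng.normal(0, 0.3, size=n)
X = np.column_stack([propensity, proxy])
model = LogisticRegression().fit(X, recorded)

# The model assigns systematically higher "risk" to group 1, even
# though the true propensity was identical by construction.
scores = model.predict_proba(X)[:, 1]
print("mean risk score, group 0:", scores[group == 0].mean())
print("mean risk score, group 1:", scores[group == 1].mean())
```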

Federal lawmakers have recently taken notice of the potential for bias and discrimination within AI systems, introducing the Algorithmic Accountability Act in 2019. The bill would require large companies to audit their machine-learning systems for bias and discrimination and to take corrective measures when issues arise. Courtroom AI use presents an opportunity to develop a new regulatory framework. Artificial intelligence in the courtroom has the potential to be a great equalizer. However, the prevailing biases of the legal system must first be resolved, or at least controlled for within the datasets these systems learn from, and sufficient regulations still need to be put in place. Until then, artificial intelligence should remain confined to legal spaces involving rules-based, low-stakes decisions.
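As an illustration of the kind of audit the bill contemplates, the sketch below compares a risk model’s false positive rates across groups and reports the disparity. The function names, threshold, and data are all hypothetical; a real audit would run against production records and a legally defined fairness standard.

```python
# A minimal sketch of one disparity check a required bias audit might
# include: comparing false positive rates across groups. Names,
# thresholds, and data are illustrative, not drawn from the bill.
import numpy as np

def false_positive_rate(pred, actual):
    """Share of true negatives the model wrongly flags as high risk."""
    negatives = ~actual
    return (pred & negatives).sum() / negatives.sum()

def audit(scores, actual, group, threshold=0.5):
    """Report the FPR per group and the gap between groups."""
    pred = scores >= threshold
    rates = {g: false_positive_rate(pred[group == g], actual[group == g])
             for g in np.unique(group)}
    disparity = max(rates.values()) - min(rates.values())
    return rates, disparity

# Example with synthetic data; scores are skewed against group 1.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1000)
actual = rng.random(1000) < 0.3           # ground-truth outcomes
scores = rng.random(1000) + 0.1 * group   # biased risk scores
rates, disparity = audit(scores, actual, group)
print(rates, f"disparity: {disparity:.2f}")
```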