To date, approximately sixty risk assessment tools have been deployed in the criminal justice system. These tools aim to differentiate between low-, medium-, and high-risk defendants and to increase the likelihood that only those who pose a risk to public safety or who are likely to flee are detained. Proponents of actuarial tools claim that these tools are meant to eliminate human biases and to rationalize the decision-making process by summarizing all relevant information more efficiently than the human brain can. Opponents of such tools fear that, in the name of science, actuarial tools reinforce human biases, harm defendants’ rights, and increase racial disparities in the system. The gap between the two camps has widened in the last few years. Policymakers are torn between the promise of technology to contribute to a more just system and a growing movement that calls for the abolition of actuarial risk assessment tools in general and machine learning-based tools in particular.
This paper examines the role that technology plays in this debate and asks whether deploying artificial intelligence (“AI”) in existing risk assessment tools realizes the fears emphasized by opponents of automation or improves our criminal justice system. It focuses on the pretrial stage and examines in depth the seven most commonly used tools. Five of these tools are based on traditional regression analysis, and two have a machine-learning component. This paper concludes that classifying pretrial risk assessment tools as AI-based tools creates the impression that sophisticated robots are taking over the courts and pushing judges out of their jobs, but that impression is far from reality. Despite the hype, there are more similarities than differences between tools based on traditional regression analysis and tools based on machine learning. Robots have a long way to go before they can replace judges, and this paper does not argue for replacement. The long list of policy recommendations discussed in the last chapter highlights the extensive work that needs to be done to ensure that risk assessment tools are both accurate and fair toward all members of society. These recommendations apply regardless of whether machine learning or regression analysis is used. Special attention is paid to assessing how machine learning would affect those recommendations. For example, this paper argues that carefully detailing each of the factors used in the tools and including multiple options to choose from (i.e., not just binary “yes-or-no” questions) would be useful for both regression analysis and machine learning. However, machine learning would likely lead to more personalized and meaningful scoring of criminal defendants because of the ability of machine learning techniques to “zoom in” on the unique details of each individual case.
This work is licensed under a Creative Commons Attribution 4.0 International License.