AI, shorthand for artificial intelligence, is a term that has been appearing more and more frequently in the media and in conversation. In simple terms, AI is the idea of computers simulating human intelligence, meaning that a machine can “learn” [1]. Just as a kindergartener may be given a worksheet full of addition problems to learn the concept of adding, an AI may be given various images of bicycles so that it can learn to identify them.
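To make that idea concrete, here is a minimal sketch of learning from labeled examples. The feature values and labels below are invented purely for illustration (they are not drawn from any real dataset), and the classifier choice is just one simple option.

```python
# A minimal sketch of supervised learning: the program is shown labeled
# examples ("bicycle" / "not bicycle") and infers a rule from them.
# Features and labels here are made up for illustration; a real system
# would extract features from actual images.
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [number of wheels, has pedals (1/0)]
examples = [[2, 1], [2, 1], [4, 0], [0, 0], [2, 1], [4, 0]]
labels = [1, 1, 0, 0, 1, 0]  # 1 = bicycle, 0 = not a bicycle

model = LogisticRegression()
model.fit(examples, labels)     # the "worksheet" phase: learn from examples
print(model.predict([[2, 1]]))  # ask about something new -> [1] (a bicycle)
```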

At the base of AI are “neural networks”: layers of “nodes” connected through channels, with an input layer, one or more hidden layers, and an output layer. The channels and nodes have numbers assigned to them, called “weights” and “biases” respectively, which are used in the calculations that allow the output layer to give us a prediction; this process is referred to as “forward propagation.” However, AIs make mistakes, especially when they are first being trained. Programmers can tell the AI that a prediction is wrong, which triggers “backpropagation”: the error information is sent back from the output layer toward the input layer, and the weights and biases are readjusted to make more accurate predictions in the future [2][3]. To return to the kindergarten example, backpropagation is like correcting the student’s mistakes so that they adjust their procedure and get the right answer next time.
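To ground these terms, here is a small hand-rolled sketch of forward propagation and backpropagation. The layer sizes, learning rate, and training task (the classic XOR problem) are illustrative assumptions, not taken from any particular system.

```python
import numpy as np

# A toy network with one hidden layer, showing the two steps described
# above: forward propagation and backpropagation.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs
y = np.array([[0.], [1.], [1.], [0.]])                  # desired outputs

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # channel weights, node biases
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
lr = 0.5

for step in range(10000):
    # Forward propagation: input layer -> hidden layer -> output layer.
    hidden = sigmoid(X @ W1 + b1)
    pred = sigmoid(hidden @ W2 + b2)

    # Backpropagation: push the error back from the output layer toward
    # the input layer, nudging weights and biases to reduce the mistake.
    err_out = (pred - y) * pred * (1 - pred)
    err_hid = (err_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ err_out
    b2 -= lr * err_out.sum(axis=0)
    W1 -= lr * X.T @ err_hid
    b1 -= lr * err_hid.sum(axis=0)

print(pred.round(2))  # predictions should move toward [0, 1, 1, 0]
```

Each pass through the loop is one round of “grading the worksheet”: the network guesses, is told how wrong it was, and adjusts its numbers slightly before trying again.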

As this branch of computer science expands, new uses for AI are being brought to the public eye, such as predictive policing algorithms: the use of AI in police departments to predict where a crime might happen, or who might be a criminal or a victim [7]. The catch is that creating an AI requires pre-existing data, which for predictive policing is pulled from police databases. But is it right to use this data when, historically, a disproportionate number of Black and Latino people have been incarcerated due to racial bias [5][6][7]?

There have been several attempts to develop AI algorithms to help police departments, but the recurring problem has been faulty data. Whether the algorithm is person-based or place-based, the same biases present in our society and justice system are spit back at us. This is made clear by “runaway feedback loops”: a problem where place-based AI keeps sending police to the same neighbourhoods regardless of actual crime rates, because past biased police data marks these areas as predisposed to crime [6][7]. Since the algorithm sends more patrols to these neighbourhoods than to others, more crimes are registered there, which in turn convinces the algorithm to keep sending officers to the same places.
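A tiny simulation can make this loop visible. Everything below is an assumption for illustration (the districts, rates, and patrol rule are invented, not any department’s actual algorithm): both neighbourhoods have the same true crime rate, yet a biased starting record keeps attracting patrols, and therefore new crime records, to the same place.

```python
import random

# Illustrative runaway feedback loop: two neighbourhoods with the SAME
# true crime rate, but a historical record that starts out biased
# against district B.
random.seed(1)
true_rate = 0.3                # identical underlying crime rate everywhere
recorded = {"A": 10, "B": 30}  # biased historical data: B looks worse

for day in range(200):
    total = recorded["A"] + recorded["B"]
    for district in ("A", "B"):
        # The "algorithm": patrol a district in proportion to its share
        # of recorded crime, so past records drive today's policing.
        patrol_prob = recorded[district] / total
        crime_happened = random.random() < true_rate
        police_present = random.random() < patrol_prob
        if crime_happened and police_present:
            recorded[district] += 1  # crimes only recorded when observed

print(recorded)  # the recorded gap widens despite identical true rates
```

Because crimes are only registered where officers are present, and officers go where crimes were registered, the initial bias feeds itself rather than being corrected.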

A paper from NYU studied this controversy in more depth and pointed out that the unethical justice systems currently in place not only contribute to biased data, but also “[support] a wider culture of suspect police practices and ongoing data manipulation.” The authors also found that these programs reinforce two types of bias: a societal and systemic bias, in which certain races, neighbourhoods, and crimes are hyper-focused on, and a bias of leniency, in which white-collar crimes, along with certain races and neighbourhoods, are overlooked [7].

With the negative attention that predictive policing is drawing, programmers have started to lean towards transparency, sharing information about their source code and data with the public. Under pressure from the Brennan Center, a nonpartisan law institute, a New York state trial court ordered the New York City Police Department to “produce historical output data from the existing predictive policing systems,” along with correspondence with Palantir, an NYPD partner in predictive policing, and notes from its Assistant Commissioner of Data Analytics, who developed the predictive policing algorithms currently in use [7][8]. The court based its order on the “public's inherent right to know” and the intent to “expose government abuses and hold it accountable” [7].

It remains to be seen what the data provided by the NYPD will reveal, but one thing is clear: as we move into this new age of technology, we must proceed with caution and keep the dialogue open. How should we proceed with AI integration into the justice system, if at all? What should be considered “ethical” when creating programs like these, and why? What data should be used to create future AI, and where should it come from? With this in mind, we are left with many questions, much room for discourse, and hope for the future.

References:

  1. Frankenfield, J. (2021, July 6). How Artificial Intelligence Works. Investopedia. http://www.investopedia.com/terms/a/artificial-intelligence-ai.asp
  2. SAS. (n.d.). Neural Networks - What are they and why do they matter? http://www.sas.com/en_us/insights/analytics/neural-networks.html
  3. Chen, J. (2021, May 19). Neural Network Definition. Investopedia. http://www.investopedia.com/terms/n/neuralnetwork.asp
  4. Dellinger, A. J. (2020, June 24). A twisted project that tried to predict criminals from a photo has come to an end. Mic. http://www.mic.com/p/a-twisted-project-that-tried-to-predict-criminals-from-a-photo-has-come-to-end-27621049
  5. BBC. (2020, June 24). Facial recognition to 'predict criminals' sparks row over AI bias. BBC News. http://www.bbc.com/news/technology-53165286
  6. Nellis, A., Mistrett, M., & Fettig, A. (2019, January 10). The Color of Justice: Racial and Ethnic Disparity in State Prisons. The Sentencing Project. http://www.sentencingproject.org/publications/color-of-justice-racial-and-ethnic-disparity-in-state-prisons/
  7. Richardson, R., Schultz, J. M., & Crawford, K. (2019). Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice. NYU Law Review.
  8. Levinson-Waldman, R., & Posey, E. (2018, January 26). Court: Public Deserves to Know How NYPD Uses Predictive Policing Software. Brennan Center for Justice. https://www.brennancenter.org/our-work/analysis-opinion/court-public-deserves-know-how-nypd-uses-predictive-policing-software