Hiding in Plain Sight: An Empirical Study of Prosecutorial Bias in AI Legal Analysis

Keywords

AI
Generative AI
bias
prosecutorial bias
AI bias

How to Cite

Pulvino, R., Sutton, D., & Naddeo, J. (2026). Hiding in Plain Sight: An Empirical Study of Prosecutorial Bias in AI Legal Analysis. Science and Technology Law Review, 27(1). https://doi.org/10.52214/stlr.v27i1.14543

Abstract

Artificial intelligence is beginning to shape the criminal justice system, but scholars have largely overlooked its impact on prosecutors—the system’s most powerful actors. This gap is significant because large language models are particularly well-suited to legal work, where analysis and writing are central. Companies now market AI tools that prepare “a first draft of potential charges” and legal memos, promising to “turn 1 day” of work “into 1 hour.” With heavy caseloads and few guardrails, prosecutors may be quick to adopt them, and some offices already report using AI to draft charging documents and analyze evidence.

We conducted a large-scale experiment examining how AI might influence prosecutorial decision-making. Using real police reports from common low-level offenses, we asked a widely used ChatGPT model to generate over 140,000 legal memos. While we anticipated signs of racial bias, we discovered a more foundational issue: the model exhibits a prosecutorial default bias. It systematically recommends prosecution even when prompted from a defense perspective, confronted with minimal evidence, or presented with clear constitutional violations.
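To make the experimental design concrete, the following is a minimal sketch of this kind of study, assuming an OpenAI-style chat API. The model name, prompt wording, and keyword-based outcome check are hypothetical illustrations only; the paper's actual prompts, police-report corpus, and coding scheme are not reproduced here.

```python
# Minimal sketch: generate legal memos on the same police report from
# opposing perspectives, then flag whether each memo recommends prosecution.
# Model, prompts, and the outcome check are hypothetical stand-ins.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PERSPECTIVES = {
    "prosecution": "You are a prosecutor. Write a memo analyzing this police report.",
    "defense": "You are a defense attorney. Write a memo analyzing this police report.",
}


def generate_memo(report_text: str, perspective: str, model: str = "gpt-4o") -> str:
    """Ask the model for a legal memo on one police report from one perspective."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": PERSPECTIVES[perspective]},
            {"role": "user", "content": report_text},
        ],
    )
    return response.choices[0].message.content


def recommends_prosecution(memo: str) -> bool:
    """Toy keyword check; a real study would use a far more careful coding step."""
    memo = memo.lower()
    return "recommend charging" in memo or "file charges" in memo
```

Repeating this loop across many reports and both perspectives, and comparing the rate at which memos recommend prosecution, yields the kind of default-bias measurement the paragraph above describes.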

These findings raise urgent questions about the integration of AI into legal workflows. We explore the role of automation bias—the tendency, even among highly trained professionals, to defer to algorithmic suggestions—and how it may anchor human decision-making toward harsher outcomes. We also examine how systems that fail to recognize Fourth Amendment violations risk eroding constitutional protections in ways that efficiency gains alone cannot justify. Finally, we argue that prosecution-oriented AI tools raise democratic concerns: America’s prosecutors are accountable to voters and local values, but AI systems may transfer key aspects of criminal justice policymaking from elected officials who answer to their communities to private companies optimizing for different objectives. We conclude by identifying areas for further research and suggesting evaluation protocols, enhanced professional responsibility standards, and regulatory safeguards—particularly relevant given recent federal mandates for “unbiased” and ideologically neutral AI—to help ensure that AI tools serve justice rather than subvert it.


This work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright (c) 2026 Rory Pulvino, Dan Sutton, J.J. Naddeo