Science and Technology Law Review
https://journals.library.columbia.edu/index.php/stlr

The Columbia Science and Technology Law Review (STLR) deals with the exciting legal issues surrounding science and technology, including patents, the Internet, biotechnology, nanotechnology, telecommunications, and the implications of technological advances on traditional legal fields such as contracts, evidence, and tax. Recent articles have discussed the practice of paying to delay the entrance of generic pharmaceuticals, proposals for expanding legal technologies focused on online dispute resolution, the rise of facial recognition technology in society and in law enforcement, the proliferation of artificial intelligence and its impact on intellectual property, the spread of misinformation as a consequence of poor data privacy protections, and protecting access to the internet in times of armed conflict.

Publisher: Columbia University Libraries | Language: en-US | ISSN: 1938-0976

Hiding in Plain Sight: An Empirical Study of Prosecutorial Bias in AI Legal Analysis
https://journals.library.columbia.edu/index.php/stlr/article/view/14543

Artificial intelligence is beginning to shape the criminal justice system, but scholars have largely overlooked its impact on prosecutors—the system’s most powerful actors. This gap is significant because large language models are particularly well-suited to legal work, where analysis and writing are central. Companies now market AI tools that prepare “a first draft of potential charges” and legal memos, promising to “turn 1 day” of work “into 1 hour.” With heavy caseloads and few guardrails, prosecutors may be quick to adopt them, and some offices already report using AI to draft charging documents and analyze evidence.

We conducted a large-scale experiment examining how AI might influence prosecutorial decision-making. Using real police reports from common low-level offenses, we asked a widely used ChatGPT model to generate over 140,000 legal memos. While we anticipated signs of racial bias, we discovered a more foundational issue: the model exhibits a prosecutorial default bias. It systematically recommends prosecution, even when prompted from a defense perspective, confronted with minimal evidence, or presented with clear constitutional violations.

These findings raise urgent questions about the integration of AI into legal workflows. We explore the role of automation bias—the tendency, even among highly trained professionals, to defer to algorithmic suggestions—and how it may anchor human decision-making toward harsher outcomes. We also examine how systems that fail to recognize Fourth Amendment violations risk eroding constitutional protections in ways that efficiency gains alone cannot justify. Finally, we argue that prosecution-oriented AI tools raise democratic concerns: America’s prosecutors are accountable to voters and local values, but AI systems may transfer key aspects of criminal justice policymaking from elected officials who answer to their communities to private companies optimizing for different objectives. We conclude by identifying areas for further research and suggesting evaluation protocols, enhanced professional responsibility standards, and regulatory safeguards—particularly relevant given recent federal mandates for “unbiased” and ideologically neutral AI—to help ensure that AI tools serve justice rather than subvert it.

Rory Pulvino, Dan Sutton, J.J. Naddeo
Copyright (c) 2026 Rory Pulvino, Dan Sutton, J.J. Naddeo
License: https://creativecommons.org/licenses/by/4.0 (CC BY 4.0)
Published 2026-01-14 | Vol. 27, No. 1 | DOI: 10.52214/stlr.v27i1.14543

Pharmaceutical Mergers: Do We Have The Right Cure?
https://journals.library.columbia.edu/index.php/stlr/article/view/14544

Few federal agencies wield tools more powerful than the Federal Trade Commission’s authority to review—and deny—proposed mergers between companies. This authority is powerful for a reason: Large mergers can be uniquely harmful to the United States economy, potentially reducing competition, undercutting consumer choice, and inflating prices.

The pharmaceutical industry is particularly sensitive to merger harms, given the limited number of competitors and the inelasticity of demand for prescription drugs. As a result, when pharmaceutical companies seek to merge, the FTC often requires that one of the companies divest ownership of certain drugs not yet on the market—so-called “pipeline” drugs—to a third party.

FTC evaluations deem the pipeline divestiture program a complete success. But does it really work? As a client once said when asked this question, “It depends on what you mean by ‘it’ and ‘work.’” In prior research, the FTC determined the success of a divestiture based solely on whether it occurred—rather than whether it meaningfully preserved competition post-merger. Our first-of-its-kind study reveals that pipeline divestitures have not in fact worked. Using conservative measures, our analysis shows that 81% of divested pipeline products fail to attain even a 1% share of their relevant markets.

But all is not lost: With a few key changes, drug divestiture can indeed achieve its intended effects. We recommend that the FTC require either a “crown jewel divestiture” (selling the on-market product, not the pipeline product) or a “skin in the game divestiture” (if the pipeline product fails, the company divests its on-market product).

Robin Feldman, Gideon Schor, Yaniv Konchitchki, Tanziuzzaman Sakib
Copyright (c) 2026 Robin Feldman, Gideon Schor, Yaniv Konchitchki, Tanziuzzaman Sakib
License: https://creativecommons.org/licenses/by/4.0 (CC BY 4.0)
Published 2026-01-14 | Vol. 27, No. 1 | DOI: 10.52214/stlr.v27i1.14544

Forget Me Not? Machine Unlearning's Implication for Privacy Law
https://journals.library.columbia.edu/index.php/stlr/article/view/14547

Generative AI systems are increasingly relied on and are already actively reshaping how we think about privacy and data protection law. Models ingest and process vast amounts of personal and sensitive data, challenging assurances of compliance with legal frameworks like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) with increasing intensity. Machine unlearning is an emerging tool in practitioners’ attempts to address these challenges: the act of selectively removing or suppressing specific data, such as personal data that a data subject requests be deleted, from AI models as a means of complying with legal obligations or policy goals. This Article’s much-needed analysis of unlearning’s technicalities and uses builds on recent critical scholarship that examines unlearning’s limitations at the technical and policy level. It delves deeper into machine unlearning’s implications for privacy and data protection law by situating it within privacy law’s broader ecosystem and proposing actionable pathways for integrating unlearning into enforcement and policy.
Specifically, this Article evaluates whether privacy laws’ legal, remedial, and normative aspirations can be reconciled with the technical realities of machine unlearning in generative AI systems. It also contributes to the privacy profession by proposing a framework for integrating machine unlearning into broader privacy-preserving interventions. In doing so, the Article positions machine unlearning as both a vital new tool and a site of contestation in the evolving landscape of privacy and AI governance, while providing a forward-looking roadmap for aligning machine unlearning with privacy law’s goals.

Jevan Hutson, Cedric Whitney, Jay T. Conrad
Copyright (c) 2026 Jevan Hutson, Cedric Whitney, Jay T. Conrad
License: https://creativecommons.org/licenses/by/4.0 (CC BY 4.0)
Published 2026-01-14 | Vol. 27, No. 1 | DOI: 10.52214/stlr.v27i1.14547

Breakthrough or Breakaway Innovation?
https://journals.library.columbia.edu/index.php/stlr/article/view/14548

This article argues that expedited regulatory review programs for innovative products, like the Food and Drug Administration’s (FDA) Breakthrough Devices Program (BDP), should not be paired with immunity from tort liability for those products and their developers. Doing so both limits the ability of regulators to manage the risks of new products and undermines incentives for their developers to adopt internal systems that address those risks. In non-emergency contexts, expedited review and liability immunity together could elevate innovation as a short-term policy goal above the more fundamental principles of safety and effectiveness for those new products over time and across populations. At minimum, if these two policies are deployed at once, they should occur only in the context of heightened regulatory supervision over those products both during and after review, backed by a strong legal mandate for the regulator and adequate resources to conduct that supervision.

To make this argument, the article provides an in-depth analysis of the FDA’s Breakthrough Devices Program, an initiative from the 21st Century Cures Act for promoting innovation in medical devices by reducing scrutiny of their safety and effectiveness. The analysis applies doctrinal and empirical approaches to explore the Program’s legal foundations, current operations, and the implications of liability preemption for patients and device manufacturers. Some patients have already been harmed by breakthrough devices and, while the Cures Act leaves some legal uncertainty, doctrinal analysis suggests those patients are likely to have limited remedies in tort law against some of these devices due to federal liability preemption. The article argues for loosening the current federal preemption of state-level tort liability for medical devices that were approved through the BDP, paired with greater regulatory supervision by the FDA both during and after the Program. While innovation remains an important policy goal, it should never surpass safety as a core regulatory imperative for novel products.

Walter G. Johnson
Copyright (c) 2026 Walter G. Johnson
License: https://creativecommons.org/licenses/by/4.0 (CC BY 4.0)
Published 2026-01-14 | Vol. 27, No. 1 | DOI: 10.52214/stlr.v27i1.14548