https://journals.library.columbia.edu/index.php/stlr/issue/feed
Science and Technology Law Review

The Columbia Science and Technology Law Review (STLR) deals with the exciting legal issues surrounding science and technology, including patents, the Internet, biotechnology, nanotechnology, telecommunications, and the implications of technological advances on traditional legal fields such as contracts, evidence, and tax. Recent articles have discussed the practice of paying to delay the entrance of generic pharmaceuticals, proposals for expanding legal technologies focused on online dispute resolution, the rise of facial recognition technology in society and in law enforcement, the proliferation of artificial intelligence and its impact on intellectual property, the spread of misinformation as a consequence of poor data privacy protections, and protecting access to the internet in times of armed conflict.

https://journals.library.columbia.edu/index.php/stlr/article/view/13886
Unsticking Litigation Science
Edith Beerdsen (edith.beerdsen@temple.edu)

Litigation science is increasingly out of step with academic, knowledge-producing science. Research practices in the social sciences have changed dramatically in the past fifteen years, in response to a knowledge crisis now popularly known as the “Replication Crisis.” The Replication Crisis caused an evolution in scientists’ understanding of what it takes to create reliable science and radically altered the way social science is conducted today. Central to these reforms is a focus on the elimination of “Analytical Flexibility”—the flexibility a researcher has to alter a research protocol along the way—in recognition of the fact that Analytical Flexibility has a propensity to lead to research results that are not only unreliable but also unreliable in undetectable ways.

The Replication Crisis represents a paradigm shift that has not yet been recognized by the legal community. In this symposium paper, I describe how litigation science and academic science are currently on divergent paths, and argue that it is critical for courts and litigators to start engaging with recent progress in research methodology. In the academic sciences, modern research practices such as preregistration are increasingly becoming routine and expected by journals, peer reviewers, and funders. Meanwhile, testifying experts retained in connection with litigation essentially proceed as they always have, and disclosure requirements have remained unchanged.

Litigation science is at risk of becoming a quaint, historically shaped discipline that bears scant relationship to its academic, knowledge-producing cousin. If we do not reform how litigation science is produced and presented, it will increasingly be seen as incapable of producing information that can usefully inform relevant issues in litigation.

Published 2025-05-23. Copyright (c) 2025 Edith Beerdsen

https://journals.library.columbia.edu/index.php/stlr/article/view/13887
Expert Histories
Edward Cheng (edward.cheng@vanderbilt.edu)

Attorneys and experts often worry that being excluded in a case will have negative ramifications on an expert’s future admissibility. This symposium contribution seeks to highlight this phenomenon, as well as evaluate its normative desirability and empirical validity. Promoting the use of expert histories, for example, may create long-term incentives that help control adversarial experts. The article further develops a statistical “frailty” model to analyze a dataset of expert admissibility rulings collected and provided by Expert Profiler, LLC. The results suggest that recent exclusions may have a robust, if small, negative effect on an expert’s odds of being admitted in a future case.

Published 2025-05-23. Copyright (c) 2025 Edward Cheng

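A note on the “frailty” model mentioned in the abstract above: in survival and panel settings, a frailty is an individual-level random effect, here an expert-specific term capturing that expert’s baseline propensity to be admitted, with covariates such as a recent exclusion shifting the odds around that baseline. The sketch below is only a minimal illustration of that general idea on synthetic data; the column names, sample sizes, effect sizes, and the choice of a random-intercept logistic specification are assumptions made for demonstration, not the article’s actual dataset or model.

    # Illustrative sketch only: a random-intercept ("frailty") logistic model fit
    # to synthetic admissibility data. Nothing here reproduces the article's
    # dataset (Expert Profiler, LLC) or its exact specification.
    import numpy as np
    import pandas as pd
    from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

    rng = np.random.default_rng(0)

    # Synthetic panel: 200 experts with 6 rulings each (assumed sizes).
    n_experts, per_expert = 200, 6
    expert_id = np.repeat(np.arange(n_experts), per_expert)
    frailty = rng.normal(0.0, 0.8, size=n_experts)              # expert-level random effect
    recent_exclusion = rng.binomial(1, 0.3, size=expert_id.size)
    logit = 1.0 + frailty[expert_id] - 0.4 * recent_exclusion   # assumed small negative effect
    admitted = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

    df = pd.DataFrame({
        "admitted": admitted,
        "recent_exclusion": recent_exclusion,
        "expert_id": expert_id,
    })

    # The variance component over expert_id plays the role of the frailty term.
    model = BinomialBayesMixedGLM.from_formula(
        "admitted ~ recent_exclusion",
        {"expert": "0 + C(expert_id)"},
        df,
    )
    result = model.fit_vb()
    print(result.summary())

With these simulated parameters, the fitted coefficient on recent_exclusion should come out negative and modest, mirroring the direction, though not the substance, of the effect the abstract reports.
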
https://journals.library.columbia.edu/index.php/stlr/article/view/13888
How Experts View the Legal System's Use of Scientific Evidence
Shari Seidman Diamond (s-diamond@law.northwestern.edu) and Richard Lempert (rlempert@umich.edu)

Legal scholars and courts frequently write about how scientific evidence is vetted and presented in legal proceedings, but the views of experts themselves have received little attention. Our research aims to fill that gap. This paper reports some of what we learned from a series of surveys we conducted, beginning with a 2016 survey of scientists who had been elected to membership in the American Academy of Arts and Sciences. Subsequent surveys were directed to subscribers of the journal Science who identified as scientists and engineers, and to self-identified experts who advertised their availability to lawyers or appeared in the expert listings on Westlaw. Responses from those surveyed capture how they regard key actors in the legal system (judges, jurors, lawyers, and other experts) as well as the weaknesses these experts see in how the legal system treats scientists and handles scientific evidence. We also examine the extent to which experts’ evaluations of these issues are mediated by their experience testifying in legal proceedings.

Published 2025-05-23. Copyright (c) 2025 Shari Seidman Diamond, Richard Lempert

https://journals.library.columbia.edu/index.php/stlr/article/view/13889
Overcoming Judicial Innumeracy
David Faigman (faigmand@uclawsf.edu)

Lawyers are not known for their proficiency in math and science. Most of us who went to law school reached a point in our math and science studies when we realized that neither medicine nor engineering was likely to be a successful career path. It is these lawyers who become judges. Yet the United States Supreme Court has increasingly put the burden for deciding complex scientific and technical questions in the hands of judges.

This Article explores this trend of putting greater responsibility for deciding scientific and technical issues on judges, particularly in the areas of evidence law, administrative law, and constitutional law. I do not, however, uniformly decry this trend. In many contexts, both as a matter of legal doctrine and as an empirical matter, judges are the appropriate decision makers for scientific and technical questions. The problem is that judges, on the whole, are unqualified for this task.

The question, then, is how courts might be better prepared to make informed decisions about scientific and technical questions. I propose a solution that comes from the scientific enterprise itself: peer review. While not a perfect solution, peer review has proved to be the best available option for evaluating the validity and value of scientific research. I explore how a formal procedure of peer review might be employed by courts to provide them with independent assessments of expert reports.

Published 2025-05-23. Copyright (c) 2025 David Faigman

https://journals.library.columbia.edu/index.php/stlr/article/view/13890
Judicial Approaches to Acknowledged and Unacknowledged AI-Generated Evidence
Maura Grossman (maura.grossman@uwaterloo.ca) and Paul Grimm (grimm@law.duke.edu)

Between 2014 and 2024, rapid advancements in computer science ushered in a dramatic new form of technology—Generative AI (“GenAI”). It offered seemingly limitless possibilities for creative applications never before imagined. But it also brought with it a darker side—the ability to create synthetic or “fake” text, images, audio, and audiovisual depictions so realistic that it has become nearly impossible—even for computer scientists—to tell authentic from fake content. Along with this new technology, new terms have been introduced, including “hallucinations” and “deepfakes.” The use of GenAI technology has not been limited to computer scientists and IT professionals. It is readily available at little or no cost to anyone with a computer and Internet access. It is no exaggeration to say that GenAI has democratized fraud, and that an ever-increasing amount of content on the Internet is now synthetic or AI-generated. Deepfakes have been used for satire and amusement, but also to humiliate and destroy the reputations and careers of the persons depicted, to spread disinformation, to manipulate elections, and to mislead the public. They will most certainly find their way into the resolution of court cases, where judges and juries will face real challenges in understanding the operations and output of complex AI systems and in distinguishing between what is real and what is not.

In this Article, we explore the development of GenAI and the deepfake phenomenon and examine their impact on the resolution of cases in courts. We address the ways in which both known-to-be-AI-generated evidence and suspected deepfake evidence may be offered during trials. We review the research literature regarding the ability of deepfakes to mislead and influence juries, and the challenges that judges, lawyers, and juries composed of laypersons will face in detecting deepfakes. We draw an important distinction between two kinds of AI evidence. The first is “acknowledged AI-generated evidence,” about which there is no dispute that the evidence was created by, or is the product of, an AI system. The second is “unacknowledged AI-generated evidence,” or potential deepfake evidence, where one party claims the evidence is an authentic representation of what actually happened, and the opposing party claims the evidence is a GenAI-fabricated deepfake. We discuss the existing rules of evidence that govern admissibility and how they might be flexibly applied—or slightly modified—to better address what is at issue with acknowledged AI-generated evidence. With respect to unacknowledged AI-generated evidence, we explain the challenges associated with using the existing rules of evidence to resolve the question of whether such evidence should be admitted, and the possible prejudice if it is allowed to be seen by the jury. We describe two proposed new rules of evidence that we have urged the Advisory Committee on Evidence Rules to consider regarding the evidentiary challenges presented by acknowledged and unacknowledged AI-generated evidence, and the actions proposed by the Committee to date. We finish with practical steps that judges and lawyers can take to be better prepared to face the challenges presented by this unique form of evidence.

Published 2025-05-23. Copyright (c) 2025 Maura Grossman, Paul Grimm

https://journals.library.columbia.edu/index.php/stlr/article/view/13891
Juries Judging Science
Valerie Hans (valerie.hans@cornell.edu)

Contemporary jury trials often include complex scientific evidence that can be challenging for lay jurors to understand and evaluate. This Article examines the capabilities of jurors to comprehend and apply scientific evidence in both criminal and civil trials. It begins by summarizing existing research on individual and collective jury decision-making competence, describing both the cognitive processes that jurors use to evaluate trial testimony and the contributions of group deliberation. The Article then explores the specific types of scientific evidence that are most challenging for jurors, including scientific research methods, statistical information, and probability estimates. It also examines the influence of factors such as the manner in which scientific evidence is presented in court and jurors’ pre-existing attitudes toward science. Finally, the Article proposes reforms to jury trials and the presentation of expert scientific testimony. These include active jury reforms such as juror note-taking, question-asking, and discussion during trial; improved methods for presenting scientific evidence; and the use of tutorials to enhance juror comprehension. By addressing the challenges jurors face, these reforms aim to improve the accuracy and fairness of jury decisions in cases with scientific evidence.

Published 2025-05-23. Copyright (c) 2025 Valerie Hans