Science and Technology Law Review
https://journals.library.columbia.edu/index.php/stlr
<p>The Columbia Science and Technology Law Review (STLR) deals with the exciting legal issues surrounding science and technology, including patents, the Internet, biotechnology, nanotechnology, telecommunications, and the implications of technological advances for traditional legal fields such as contracts, evidence, and tax. Recent articles have discussed the practice of paying to delay the entrance of generic pharmaceuticals, proposals for expanding legal technologies focused on online dispute resolution, the rise of facial recognition technology in society and in law enforcement, the proliferation of artificial intelligence and its impact on intellectual property, the spread of misinformation as a consequence of poor data privacy protections, and protecting access to the internet in times of armed conflict.</p>
ap4248@columbia.edu (Ashley Pennington)
Fri, 31 Jan 2025 19:12:36 +0000

Data Infrastructure as Court Architecture
https://journals.library.columbia.edu/index.php/stlr/article/view/13323
<p><span style="font-weight: 400;">Whether courts like it or not, digital legal data has become an important part of both litigation and justice administration. Constitutionally protected as public records, court data and court-adjacent data must be made transparent and accessible to the general public. However, alongside considerations of how to make court data accessible externally, we must also consider how court data is situated internally within courts. Conceptualizing data infrastructure as court architecture reframes the importance of court data to better align with its current utility in courts, while privileging the very real structural issues that courts must contend with to ensure the continued health of data systems. This Article considers the usefulness of data in the current terrain of law and justice, evaluates the ecosystem of data products currently at play in all levels of courts, and offers concrete pathways to data infrastructure development through Open Knowledge Networks.</span></p>
Kat Albrecht
Copyright (c) 2025 Ashley Pennington
https://creativecommons.org/licenses/by/4.0
https://journals.library.columbia.edu/index.php/stlr/article/view/13323
Fri, 31 Jan 2025 00:00:00 +0000

Certifying Legal AI Assistants for Unrepresented Litigants: A Global Survey of Access to Civil Justice, Unauthorized Practice of Law, and AI
https://journals.library.columbia.edu/index.php/stlr/article/view/13336
<p>The global integration of artificial intelligence (AI) into legal services has created a critical need for clarity regarding unauthorized practice of law (UPL) rules. Traditionally, UPL rules prohibited unlicensed individuals from engaging in activities legally reserved for qualified attorneys, including, in some jurisdictions, offering legal advice, interpreting laws, representing clients in court, or drafting legal documents. Now that some AI systems can perform functions that practice of law regulating authorities have traditionally reserved for licensed attorneys, a framework is needed to certify the use of legal AI assistants by unrepresented litigants.</p> <p>Ensuring the accuracy of information provided by legal AI assistants for unrepresented litigants benefits the entire legal community, including attorneys, by promoting stricter standards and higher acceptance thresholds. We examine the perspectives of several primary stakeholders in certifying legal AI assistants, including unrepresented litigants, practice of law regulating authorities, judiciaries, the legislature, the legal aid community, and the legal tech community.</p> <p>We conduct a detailed survey of access to justice, AI, and UPL in various international jurisdictions, including Argentina, Australia, Brazil, Canada, China, the European Union, Germany, India, New Zealand, Nigeria, Singapore, the United Kingdom, and the United States. In each of these jurisdictions, we explore how UPL is currently managed in the context of legal AI use by unrepresented litigants. 
We also include a 50-state and 6-territory survey for the United States on what each Bar Association and Judiciary is doing to regulate legal AI use by unrepresented litigants.</p> <p>In light of this survey, we propose that practice of law regulating authorities add certified legal AI assistants to their lists of UPL exemptions so that such assistants can provide specific and useful legal information, guidance, and advice to unrepresented litigants. We propose a capability-based framework for certifying legal AI assistants for unrepresented litigants. This is intended as a harmonized global proposal, designed for local implementation by each jurisdiction’s practice of law regulating authority, with the flexibility to address individual jurisdictional nuances.</p> <p>Unrepresented litigants are already using AI chatbots for help in legal proceedings, sometimes to their detriment. Our proposal aims to allow unrepresented litigants to use legal AI assistants that have been verified for accuracy. This framework addresses the key justification for UPL restrictions—the risk of incorrect legal guidance—by basing the certification of individual capabilities on their accuracy when tested on public benchmark datasets. Under this approach, legal AI assistants are added to lists of UPL exemptions if their accuracy meets or exceeds a certification threshold when tested on these public benchmark datasets. The jurisdiction’s practice of law regulating authority would set the certification threshold or, as we suggest, delegate that task to a third-party certifying authority. While this framework requires many public benchmark datasets, the legal AI community is rapidly developing them.</p> <p>To enable AI to enhance access to justice for unrepresented litigants globally, practice of law regulating authorities in each jurisdiction must choose to exempt certified legal AI systems for unrepresented litigants from unauthorized practice of law regulations.</p>
Mia Bonardi, Dr. L. Karl Branting
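The certification mechanism described in the abstract above reduces to a simple per-capability threshold test. The following sketch makes that rule concrete; the capability names, accuracy figures, and 0.95 threshold are illustrative assumptions, not values taken from the Article.

```python
# A minimal sketch of the capability-based certification rule: a legal AI
# assistant's individual capability is certified (and thus eligible for a UPL
# exemption) only if its measured accuracy on a public benchmark dataset meets
# or exceeds the threshold set by the certifying authority. All names and
# numbers below are hypothetical illustrations.

def certify_capabilities(benchmark_accuracy: dict[str, float],
                         threshold: float) -> dict[str, bool]:
    """Return, per capability, whether it clears the certification threshold."""
    return {capability: accuracy >= threshold
            for capability, accuracy in benchmark_accuracy.items()}

# A hypothetical assistant scored on three capabilities against a 0.95 threshold.
results = certify_capabilities(
    {"form_completion": 0.97, "deadline_guidance": 0.99, "substantive_advice": 0.91},
    threshold=0.95,
)
print(results)  # only capabilities at or above the threshold are certified
```

In practice, each jurisdiction's regulating authority (or its delegated third-party certifier) would publish its own threshold and benchmark suite, so the same assistant could be certified for different capability sets in different jurisdictions.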
Copyright (c) 2025 Ashley Pennington
https://creativecommons.org/licenses/by/4.0
https://journals.library.columbia.edu/index.php/stlr/article/view/13336
Fri, 31 Jan 2025 00:00:00 +0000

Privacy as a Matter of Public Health
https://journals.library.columbia.edu/index.php/stlr/article/view/13337
<p>This Article examines the striking parallels between contemporary privacy challenges and past public health crises involving tobacco, processed foods, and opioids. Despite surging state and federal privacy legislation, many of these new privacy law and policy activities follow familiar patterns: an emphasis on individual choice, narrowly defined rights and remedies, and a lack of holistic accounting of how privacy incursions affect society as a whole. We argue instead for a salutary shift in privacy law and advocacy: understanding privacy through the lens of public health.</p> <p>By tracing systemic factors that allowed industries to repeatedly subvert public welfare—from information asymmetries and regulatory capture to narratives of individual responsibility—we explore a fundamental rethinking of privacy protection. Our analysis of case studies reveals remarkable similarities between public health challenges of the past half-century or so and the ongoing consumer privacy crisis. We explore how public health frameworks emphasizing preventative policies and reshaping social norms around individual choices could inform privacy advocacy. To do so, we examine a spectrum of proposals to align privacy with public health, from adopting public health insights to provocatively reframing privacy violations as an epidemic threatening basic wellbeing.</p> <p>This Article offers a novel framework for addressing the current privacy crisis, drawing on the rich history and strategies of public health. In reframing privacy violations as a societal health issue rather than a matter of consumer choice, we see new avenues for effective regulation and protection. Our proposed approach not only aligns with successful public health interventions of the past but also provides a more holistic and proactive stance towards safeguarding privacy in the digital age.</p>
Yafit Lev-Aretz, Aileen Nielsen
Copyright (c) 2025 Ashley Pennington
https://creativecommons.org/licenses/by/4.0
https://journals.library.columbia.edu/index.php/stlr/article/view/13337
Fri, 31 Jan 2025 00:00:00 +0000

Location Is All You Need: Copyright Extraterritoriality and Where to Train Your AI
https://journals.library.columbia.edu/index.php/stlr/article/view/13338
<p>The development of artificial intelligence (“AI”) models requires vast quantities of data, which will often include copyrighted materials. The reproduction of copyrighted materials in the course of training AI models will infringe on copyright, unless there are applicable exceptions and limitations exempting such activities. There is so far considerable divergence among jurisdictions in this regard, including the United States, the EU, the U.K., Japan, Singapore, Australia, India, Israel, and many others. In the absence of international harmonization, there is therefore a high likelihood that the same type of training activity would be considered copyright infringement in some countries but not in others.</p> <p>The AI community is not blind to that risk. If copyright law restricts the development and deployment of AI, developers may decide to relocate their operations elsewhere, where the reproduction of training data is clearly not infringing. This Article concludes that there is a loophole in the international copyright system, as it currently stands, that would permit large-scale copying of training data in one country where this activity is not infringing. Once the training is done and the model is complete, developers could then make the model available to customers in other countries, even if the same training activities would have been infringing if they had occurred there. Because copyright laws are territorial in nature, by default they can only restrict infringing conduct occurring in their respective countries. From that point of view, for AI developers, location is indeed all you need.</p> <p>The EU has become the first to respond to this problem by retroactively extending its text and data mining exception extraterritorially to training activities occurring in non-EU countries, once the completed AI model is placed on the EU market.
While such an extraterritorial application benefits rightholders and closes the loophole now present, it makes the situation significantly more complex for developers. If other regulators decide to follow the same path as the EU, which previously happened in the data privacy context, then developers would be facing multiple, conflicting copyright laws targeting the same underlying activity. This could significantly complicate the development process for AI and potentially undermine the AI industry. This Article critically discusses these and related issues, and whether an extraterritorial application of copyright laws is compatible with territoriality norms that are supposed to respect foreign sovereignty. It also explores, in light of these difficulties, whether we should instead shift focus from regulating the inputs (i.e., the data used to train AI models) to regulating the outputs (i.e., the AI-generated content itself). Indeed, to the extent that the transnational data loophole cannot be closed without infringing upon foreign sovereignty, we may need to look at other regulatory means instead.</p> <p>The Article also suggests that we should consider model training and copyright infringement as a product-by-process problem, which calls for a comparison with how patent law solved similar extraterritoriality issues. Several decades ago, international patent treaties harmonized the extent to which patent laws can be applied extraterritorially to reach imported products derived from foreign manufacturing processes. If regulators wish to extend their copyright laws’ extraterritoriality to close the loophole that exists for training activities in the context of AI, and to do so in a way that is aligned with copyright territoriality, there may be a need to similarly revise international copyright treaties. 
This Article, therefore, urgently calls for a similarly coordinated international effort in copyright law, which balances the interests of rightholders with the technical, regulatory, and economic realities faced by developers. How we resolve these issues could make or break the future of AI. If we cannot find a way to reconcile the interests of rightholders and AI stakeholders, the world may be left with a segregated and fragmented AI landscape, one in which there can only be losers and no winners.</p>
Mattias Rättzén
Copyright (c) 2025 Ashley Pennington
https://creativecommons.org/licenses/by/4.0
https://journals.library.columbia.edu/index.php/stlr/article/view/13338
Fri, 31 Jan 2025 00:00:00 +0000

An Anatomy of Algorithm Aversion
https://journals.library.columbia.edu/index.php/stlr/article/view/13339
<p>People are said to show “algorithm aversion” when they prefer human forecasters or decision-makers to algorithms, even though algorithms generally outperform people (in forecasting accuracy and/or optimal decision-making in furtherance of a specified goal). Algorithm aversion also has “softer” forms, as when people prefer human forecasters or decision-makers to algorithms in the abstract, without having clear evidence about comparative performance. Algorithm aversion has strong implications for policy and law; it suggests that those who seek to use algorithms, such as officials in federal agencies, might face serious public resistance. Algorithm aversion is a product of diverse mechanisms, including (1) a desire for agency; (2) a negative moral or emotional reaction to judgment by algorithms; (3) a belief that certain human experts have unique knowledge, unlikely to be held or used by algorithms; (4) ignorance about why algorithms perform well; and (5) asymmetrical forgiveness, or a larger negative reaction to algorithmic error than to human error. An understanding of these various mechanisms provides some clues about how to overcome algorithm aversion, and also about its boundary conditions. These clues bear on numerous decisions in law and policy, including those of federal agencies (such as the Department of Homeland Security and the Internal Revenue Service) and those involved in the criminal justice system (such as those thinking about using algorithms for bail decisions).</p>
Cass R. Sunstein, Jared H. Gaffe
Copyright (c) 2025 Ashley Pennington
https://creativecommons.org/licenses/by/4.0
https://journals.library.columbia.edu/index.php/stlr/article/view/13339
Fri, 31 Jan 2025 00:00:00 +0000