Science and Technology Law Review
https://journals.library.columbia.edu/index.php/stlr/issue/feed

The Columbia Science and Technology Law Review (STLR) deals with the exciting legal issues surrounding science and technology, including patents, the Internet, biotechnology, nanotechnology, telecommunications, and the implications of technological advances on traditional legal fields such as contracts, evidence, and tax. Recent articles have discussed the practice of paying to delay the entry of generic pharmaceuticals, proposals for expanding legal technologies focused on online dispute resolution, the rise of facial recognition technology in society and in law enforcement, the proliferation of artificial intelligence and its impact on intellectual property, the spread of misinformation as a consequence of poor data privacy protections, and protecting access to the internet in times of armed conflict.

How Generative AI Turns Copyright Law Upside Down
Mark Lemley
https://journals.library.columbia.edu/index.php/stlr/article/view/12761

While courts are litigating many copyright issues involving generative AI, from who owns AI-generated works to the fair use of training to infringement by AI outputs, the most fundamental changes generative AI will bring to copyright law don't fit in any of those categories. The new model of creativity, generative AI, puts considerable strain on copyright's two most fundamental legal doctrines: the idea-expression dichotomy and the substantial similarity test for infringement. Increasingly, creativity will be lodged in asking the right questions, not in creating the answers. Asking questions may sometimes be creative, but the AI does the bulk of the work that copyright traditionally exists to reward, and that work will not be protected. That inverts what copyright law now prizes. And because asking the questions will be the basis for copyrightability, similarity of expression in the answers will no longer be of much use in proving the copying of the questions. That means we may need to throw out our test for infringement, or at least apply it in fundamentally different ways.

Focusing On Fine-Tuning
Paul Ohm
https://journals.library.columbia.edu/index.php/stlr/article/view/12762

Those who design and deploy generative AI models, such as Large Language Models like GPT-4 or image diffusion models like Stable Diffusion, can shape model behavior in four distinct stages: pretraining, fine-tuning, in-context learning, and input and output filtering. The four stages differ along many dimensions, including cost, access, and persistence of change. Pretraining is always very expensive and in-context learning is nearly costless. Pretraining and fine-tuning change the model in a more persistent manner, while in-context learning and filters make less durable alterations. These are but two of many such distinctions reviewed in this Essay.

Legal scholars, policymakers, and judges need to understand the differences between the four stages as they try to shape and direct what these models do. Although legal and policy interventions can (and probably will) occur during all four stages, many will best be directed at the fine-tuning stage. Of the four approaches, fine-tuning will often represent the best balance of power, precision, and disruption.
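The cost and persistence contrasts in the four-stage taxonomy above can be made concrete with a small sketch. The Python below is purely illustrative and assumes everything it names: pretrain, fine_tune, with_instructions, and filtered_generate are hypothetical stand-ins, not functions of any real model or library.

```python
# Purely illustrative sketch of the four stages described in the abstract above.
# Every function here is a hypothetical stand-in (no real model or library API);
# the point is only the contrast in cost and persistence between the stages.

def pretrain(corpus: list[str]) -> dict:
    """Stage 1: pretraining. Very expensive; produces the base model weights."""
    return {"weights": f"base weights learned from {len(corpus)} documents"}

def fine_tune(model: dict, curated_examples: list[str]) -> dict:
    """Stage 2: fine-tuning. Cheaper than pretraining; the change persists in the weights."""
    tuned = dict(model)
    tuned["weights"] += f" + adjustments from {len(curated_examples)} curated examples"
    return tuned

def generate(model: dict, prompt: str) -> str:
    """Placeholder for inference with whatever weights the model currently has."""
    return f"output of [{model['weights']}] for prompt: {prompt!r}"

def with_instructions(prompt: str, instructions: str) -> str:
    """Stage 3: in-context learning. Nearly costless; lasts only for this one request."""
    return f"{instructions}\n\n{prompt}"

def filtered_generate(model: dict, prompt: str, blocklist: set[str]) -> str:
    """Stage 4: input and output filtering. Wraps the model without altering it."""
    if any(term in prompt.lower() for term in blocklist):
        return "[input refused]"
    output = generate(model, prompt)
    if any(term in output.lower() for term in blocklist):
        return "[output withheld]"
    return output

if __name__ == "__main__":
    base = pretrain(["web-scraped document"] * 1000)   # persistent change, very costly
    tuned = fine_tune(base, ["curated example"] * 50)  # persistent change, far cheaper
    prompt = with_instructions("Summarize this statute.", "Answer cautiously.")  # ephemeral
    print(filtered_generate(tuned, prompt, blocklist={"forbidden"}))             # ephemeral, per request
```

In this toy pipeline only the first two steps alter what is stored in the model; the instruction prefix and the filter act anew on each request, which is the durability distinction the Essay draws.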
A Products Liability Framework for AI
Catherine M. Sharkey
https://journals.library.columbia.edu/index.php/stlr/article/view/12763

A products liability framework, drawing inspiration from the regulation of FDA-approved medical products (which includes federal regulation as well as products liability), holds great promise for tackling many of the challenges AI poses. Notwithstanding the new challenges that sophisticated AI technologies pose, products liability provides a conceptual framework capable of responding to the learning and iterative aspects of these technologies. Moreover, this framework provides a robust model of the feedback loop between tort liability and regulation.

The regulation of medical products provides an instructive point of departure. The FDA has recognized the need to revise its traditional paradigm for medical device regulation to fit adaptive AI/ML technologies, which enable continuous improvements and modifications to devices based on information gathered during use. AI/ML technologies should hasten an even more significant regulatory paradigm shift at the FDA, away from a model that puts most of its emphasis on (and resources into) ex ante premarket approval to one that highlights ongoing postmarket surveillance. As such a model takes form, products liability should continue to play a significant information-production and deterrence role, especially during the transition period before a new ex post regulatory framework is established.

Do Cases Generate Bad AI Law?
Alicia Solow-Niederman
https://journals.library.columbia.edu/index.php/stlr/article/view/12764

There's an AI governance problem, but it's not (just) the one you think. The problem is that our judicial system is already regulating the deployment of AI systems, yet we are not coding what is happening in the courts as privately driven AI regulation. That's a mistake. AI lawsuits here and now are determining who gets to seek redress for AI injuries; when and where emerging claims are resolved; what is understood as a cognizable AI harm (and what is not); and why that is so.

This Essay exposes how our judicial system is regulating AI today and critically assesses the governance stakes. When we do not adequately recognize how the generative AI cases being decided by today's judges are already operating as a type of AI regulation, we fail to consider which emerging tendencies of adjudication about AI are likely to make good or bad AI law. For instance, litigation may do good agenda-setting and deliberative work as well as surface important information about the operation of private AI systems. But adjudication of AI issues can be bad too, given the risk of overgeneralization from particularized facts; the potential for too much homogeneity in the location of lawsuits and the kinds of litigants; and the existence of fundamental tensions between social concerns and current legal precedents.
If we overlook these dynamics, we risk missing a vital lesson: AI governance requires better accounting for the interactive relationship between regulation of AI through the judicial system and more traditional public regulation of AI. Shifting our perspective creates space to consider new AI governance possibilities. For instance, litigation incentives (such as motivations for bringing a lawsuit or motivations to settle) or the types of remedies available may open up or close down further regulatory development. This shift in perspective also allows us to see how considerations that on their face have nothing to do with AI, such as access to justice measures and the role of judicial minimalism, in fact shape the path of AI regulation through the courts. Today's AI lawsuits provide an early opportunity to expand AI governance toolkits and to understand AI adjudication and public regulation as complementary regulatory approaches. We should not throw away our shot.

Fairness & Privacy in an Age of Generative AI
Alice Xiang
https://journals.library.columbia.edu/index.php/stlr/article/view/12765

Generative AI technologies have made tremendous strides recently and have captured the public's imagination with their ability to mimic what was previously thought to be a fundamentally human capability: creativity. While such technologies hold great promise to augment human creativity and automate tedious processes, they also carry risks that stem from their development process. In particular, the reliance of foundation models on vast amounts of typically uncurated, often web-scraped training data has led to concerns around fairness and privacy. Algorithmic fairness in this context encompasses concerns around potential biases that can be learned by models due to skews in their training data and then reflected in their generated outputs. For example, without intervention, image generation models are more likely to generate images of lighter skin tone male individuals for professional occupations and images of darker skin tone female individuals for working-class occupations. This further raises questions around whether there should be legal protections from such pernicious stereotypical representations. Privacy is also a concern, as generative AI models can ingest large amounts of personal and biometric information in the training process, including face and body biometrics for image generation and voice biometrics for speech generation. This Essay will discuss the types of fairness and privacy concerns that generative AI raises and the existing landscape of legal protections under anti-discrimination law and privacy law to address these concerns. This Essay argues that the proliferation of generative AI raises challenging and novel questions around (i) what protections should be offered around the training data used to develop such systems and (ii) whether representational harms should be protected against in an age of AI-generated content.
Beyond Algorithmic Disclosure for Generative AI
Christopher Yoo
https://journals.library.columbia.edu/index.php/stlr/article/view/12766

One of the most commonly recommended policy interventions with respect to algorithms in general and artificial intelligence ("AI") systems in particular is the need for greater transparency, often focusing on the disclosure of the variables employed by the algorithm and the weights given to those variables. This Essay argues that any meaningful transparency regime must provide information on other critical dimensions as well. For example, any transparency regime must also include key information about the data on which the algorithm was trained, including its source, scope, quality, and inner correlations, subject to constraints imposed by copyright, privacy, and cybersecurity law. Disclosures about pre-release testing also play a critical role in understanding an AI system's robustness and its susceptibility to specification gaming. Finally, the fact that AI, like all complex systems, tends to exhibit emergent phenomena, such as proxy discrimination, interactions among multiple agents, the impact of adverse environments, and the well-known tendency of generative AI to hallucinate, makes ongoing post-release evaluation a critical component of any system of AI transparency.
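Read together, the dimensions this abstract identifies amount to a structured disclosure record rather than a bare list of variables and weights. The Python dataclass sketch below is a hypothetical illustration of that structure; the class and field names (TransparencyRecord, TrainingDataDisclosure, and so on) are assumptions made for this example, not a standard proposed by the Essay or by any regulator.

```python
# Hypothetical illustration only: one way to structure the disclosure dimensions
# discussed above. Field names are assumptions, not an existing or proposed standard.
from dataclasses import dataclass, field

@dataclass
class TrainingDataDisclosure:
    sources: list[str]          # where the training data came from
    scope: str                  # what the data does and does not cover
    quality_notes: str          # known errors, gaps, and curation steps
    notable_correlations: str   # correlations in the data that may drive proxy effects

@dataclass
class TransparencyRecord:
    variables_and_weights: dict[str, float]   # the conventional focus of algorithmic disclosure
    training_data: TrainingDataDisclosure     # data provenance, within copyright/privacy/security limits
    prerelease_tests: list[str]               # robustness and specification-gaming evaluations
    postrelease_monitoring: list[str] = field(default_factory=list)  # ongoing checks for emergent behavior

if __name__ == "__main__":
    record = TransparencyRecord(
        variables_and_weights={"feature_a": 0.4, "feature_b": 0.6},
        training_data=TrainingDataDisclosure(
            sources=["licensed corpus", "public web crawl"],
            scope="English-language text collected through 2023",
            quality_notes="deduplicated; known gaps in low-resource languages",
            notable_correlations="geography correlates with income in the crawl",
        ),
        prerelease_tests=["red-team evaluation", "specification-gaming audit"],
        postrelease_monitoring=["quarterly proxy-discrimination audit", "hallucination-rate tracking"],
    )
    print(record.prerelease_tests)
```

The design choice the sketch is meant to surface is simply that pre-release testing and post-release monitoring are separate fields: a disclosure made only at launch cannot capture the emergent behavior the Essay says must be evaluated on an ongoing basis.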