Beyond Algorithmic Disclosure for Generative AI

How to Cite

Yoo, C. (2024). Beyond Algorithmic Disclosure for Generative AI. Science and Technology Law Review, 25(2).


One of the most commonly recommended policy interventions for algorithms in general and artificial intelligence (“AI”) systems in particular is greater transparency, often focused on disclosure of the variables an algorithm employs and the weights given to those variables. This Essay argues that any meaningful transparency regime must provide information on other critical dimensions as well. For example, it must include key information about the data on which the algorithm was trained, including the data’s source, scope, quality, and internal correlations, subject to constraints imposed by copyright, privacy, and cybersecurity law. Disclosures about pre-release testing also play a critical role in understanding an AI system’s robustness and its susceptibility to specification gaming. Finally, because AI, like all complex systems, tends to exhibit emergent phenomena, such as proxy discrimination, interactions among multiple agents, the impact of adverse environments, and the well-known tendency of generative AI to hallucinate, ongoing post-release evaluation is a critical component of any system of AI transparency.

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright (c) 2024 Professor Christopher Yoo