Abstract
Generative AI systems are increasingly relied upon and are actively reshaping how we think about privacy and data protection law. Models ingest and process vast amounts of personal and sensitive data, increasingly challenging assurances of compliance with legal frameworks like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). Machine unlearning is an emerging tool in practitioners’ attempts to address these challenges: the selective removal or suppression of specific data, such as personal data that a data subject requests be deleted, from AI models as a means of complying with legal obligations or advancing policy goals. This Article’s much-needed analysis of unlearning’s technicalities and uses builds on recent critical scholarship that examines unlearning’s limitations at the technical and policy levels. It delves deeper into machine unlearning’s implications for privacy and data protection law by situating unlearning within privacy law’s broader ecosystem and proposing actionable pathways for integrating it into enforcement and policy. Specifically, this Article evaluates whether privacy law’s legal, remedial, and normative aspirations can be reconciled with the technical realities of machine unlearning in generative AI systems. It also contributes to the privacy profession by proposing a framework for integrating machine unlearning into broader privacy-preserving interventions. In doing so, the Article positions machine unlearning as both a vital new tool and a site of contestation in the evolving landscape of privacy and AI governance, while providing a forward-looking roadmap for aligning machine unlearning with privacy law’s goals.

This work is licensed under a Creative Commons Attribution 4.0 International License.
Copyright (c) 2026 Jevan Hutson, Cedric Whitney, Jay T. Conrad
