Who is liable, and why, if a self-driving car crashes?

Such questions, and related ones raised by new generative artificial intelligence interfaces like ChatGPT and DALL-E, are the subject of increasing public, governmental, and academic scrutiny. Yet liability frameworks for artificial intelligence remain practically nonexistent in the United States.

In March 2023, the United States Chamber of Commerce’s Commission on Artificial Intelligence, Competitiveness, Inclusion, and Innovation issued a call for regulating AI, expressing concerns related to national security, business, and technological competition. Yet such calls to action are about as far as the United States has gotten in attempting to regulate AI or create liability frameworks for it. There is no general federal framework for AI, only minimal state regulation, and very little guidance on liability. In March 2022, the US National Highway Traffic Safety Administration promulgated a Final Rule extending federal motor vehicle safety standards to autonomous vehicles, but it did not significantly address product liability.

The European Union is comparatively further ahead. In September 2022, the European Commission issued an AI Liability Directive and a revised Product Liability Directive: the revised Product Liability Directive governs claims rooted in EU product liability law, while the AI Liability Directive addresses fault-based claims rooted in Member State law. Together they continue in the vein of a “dual-track” system, under which a developer must satisfy both Member State and EU-wide regulations to avoid liability, while standardizing procedural questions such as the disclosure of evidence and burdens of proof across Member States. Divergences between the two Directives have drawn criticism and prompted calls for a more unified framework set out in a single regulation.[1]

The EU regulations classify AI systems as either high-risk or low-risk. They treat general-purpose AI systems (GPAIS), which include generative AI systems like ChatGPT, as high-risk and therefore subject to stricter liability rules. Prof. Philipp Hacker argues that this approach would make it difficult for generative AI developers “to meet the obligations of the AI Act for all possible use cases,” a burden that will fall particularly hard on non-commercial developers, potentially hampering the development of generative AI in Europe.[2] “In the end, paradoxically, high-risk AI Applications risk becoming less safe as a result of the GPAIS liability rules because well-known and safe GPAIS model building tools cannot be used for them anymore.”[3] This analysis may prove instructive in the United States context as well.

A further problem for AI liability is the difficulty of understanding precisely how “black box” AI systems actually work, which makes establishing proof and gathering evidence particularly challenging. Recent academic work by Paulo Henrique Padovan et al. argues that explainable AI (XAI), a set of techniques and methods for identifying which conditions in an AI system cause which effects, should be adopted as a forensic tool for determining AI liability.[4]
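
To make the idea concrete, the sketch below applies one common XAI technique, permutation feature importance, to a hypothetical model trained on synthetic data. The dataset, model, and feature indices are assumptions for illustration only and are not drawn from Padovan et al.; the point is simply that such methods can rank which inputs most influenced a model’s decisions, which is the kind of evidence a forensic XAI analysis might put before a court.

```python
# Illustrative only: a toy model and synthetic data, not any system or method
# discussed in the article. Permutation importance is one widely used XAI
# technique: shuffle one input feature at a time and measure how much the
# model's accuracy drops; larger drops indicate features the model relied on.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)

# Hypothetical dataset: 1,000 examples with 4 input features; the outcome
# depends mainly on features 0 and 2.
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Measure how much held-out accuracy falls when each feature is shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance {score:.3f}")
```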

The problem of explainability, along with the difficulty of harmonizing state and federal AI liability standards, will be particularly pressing for the United States as it responds to calls to develop such standards and as lawsuits concerning AI systems increase in number.


[1] See Philipp Hacker, The European AI Liability Directives 10 (2023), https://arxiv.org/pdf/2211.13960.pdf.

[2] Id. at 14.

[3] Id. at 15.

[4] See Paulo Henrique Padovan et al., Black Is the New Orange: How to Determine AI Liability, 31 Artificial Intelligence and Law 133 (2023).