Artificial Intelligence (“AI”) is quickly becoming a part of everyday life, and we are already starting to see the benefits it brings. Notwithstanding its advantages, AI presents unique difficulties for tort liability because it blurs the line between human responsibility and machine autonomy. Autonomous vehicles, AI-assisted healthcare tools, and household robots, while poised to make life easier and safer, have raised growing concern within the legal community over who will be liable when AI machines cause harm.
Assigning liability in traditional tort cases is relatively straightforward. Put simply, humans act, and if those acts result in harm to another, the actor faces liability for that harm. AI complicates this framework. When a self-driving car gets into an accident, a human does not act at all; a machine does. Does this mean we should hold the AI coded into the car liable for the damages it caused? Doing so would require treating the AI as a legal entity, like a corporation. Intuitively, this may seem permissible, but there are important differences between AI and companies that make treating AI as a legal entity complicated. At the moment, AI is considered property and thus does not have the same legal rights and responsibilities as a corporation.[1] Moreover, unlike a corporation, AI’s actions are not driven directly by human decisions. The beauty of AI is that it makes its own decisions, absent human intervention.
If AI is treated as property, then perhaps we should assign liability in the same manner as we do when a person’s property causes harm. For instance, in the oft-cited case of Pingaro v. Rossi, Pingaro sued Rossi after she was bitten by Rossi’s dog.[2] New Jersey, the state where the incident occurred, had a dog-bite statute that imposed strict liability on an owner whose dog bites someone, regardless of whether the owner had knowledge of the dog’s dangerous propensities.[3] Rossi was therefore held liable for Pingaro’s damages.[4]
Strict liability presents an interesting framework from which to approach AI liability, but difficulties remain. For starters, which party would be strictly liable: the owner of the AI machine or its developer? In the dog scenario, it would be unreasonable to hold the breeder of Pingaro’s dog strictly liable for breeding a dog prone to biting people. But in the context of a healthcare AI that misdiagnoses a rare disease, would strict liability fall on the hospital that purchased the AI or on the software developer who designed it? There is a strong case for both. By using the AI to diagnose diseases, the hospital subjected itself to the risk that the AI would make a mistake. On the other hand, it is not a reach to argue that the party that created the AI should be responsible for producing a faulty product or for failing to disclose its risks to the purchaser.
As AI continues to evolve, the question of liability for harm caused by autonomous machines remains a critical legal challenge. While traditional frameworks like strict liability offer potential solutions, they must be adapted to account for the unique nature of AI, particularly the distinction between machine autonomy and human control. Ultimately, resolving AI liability will require a careful balancing act, ensuring that developers, users, and AI systems are held accountable in a way that reflects the complexities of this emerging technology.
[1] Brandeis Marshall, No Legal Personhood for AI, Patterns (Nov. 10, 2023), https://www.sciencedirect.com/science/article/pii/S2666389923002453 (last visited Nov. 26, 2024).
[2] Pingaro v. Rossi, 731 A.2d 523, 525 (N.J. Super. Ct. App. Div. 1999).
[3] Id.
[4] Id.