Deepfakes are hyper-realistic altered sounds, images, and videos generated through artificial intelligence. Although there are entertaining examples, the vast majority of deepfakes are harmful. In fact, a research company that has tracked deepfake videos since December 2018 has found that between 90% and 95% of these videos are nonconsensual pornography. Other harmful examples include scams that mimic the voice of a loved one to ask for money and videos that spread misinformation by putting words in political candidates’ mouths. Is the law equipped to address the harms caused by deepfakes? Tort law provides several potential causes of action, including defamation and the privacy torts of false light publicity and appropriation of likeness.

Defamation. Liability for defamation exists where there is a false and defamatory statement about another that is published to a third party, with at least negligence on the actor’s part. One challenge is proving that an altered image is a statement of fact: the defendant may argue that indicators, including context, reveal the image as fake, such that a reasonable person would not perceive it as factual. Moreover, because the plaintiff’s defamation claim must be weighed against the defendant’s First Amendment rights, this is frequently an uphill battle for the plaintiff.

False light publicity. The main distinction between defamation and false light is that “defamation addresses harm to reputation in the external world, while false light protects harm to one's inner self.” To create liability for false light publicity, the content must be highly offensive to a reasonable person, it must be “publicized,” and the actor must have “had knowledge of or acted in reckless disregard as to the falsity of the matter.” Because whoever publishes a deepfake image or video knows that it is false, the final element will nearly always be met. The “publicity” element, however, requires the matter to be communicated to a large number of people, so a false light action may not be available if the deepfake is shared with a small group or on an online platform with a limited audience. Furthermore, not all states recognize false light publicity as a cause of action separate from defamation.

Appropriation of name or likeness. An actor who appropriates the name or likeness of another for his or her own use is liable for invasion of privacy. In Hamilton v. Speight, the plaintiff, a former professional athlete, claimed that the defendants unlawfully used his likeness in their video games. The plaintiff and the video game character were similar in physical appearance, in their backgrounds as athletes for a team with the same name, and in the characteristic way they dressed.[1] The significant differences that ultimately led the court to conclude that the defendants’ First Amendment rights prevailed were that the plaintiff did not fight fictional creatures in real life and that the character’s persona was distinct from the plaintiff’s.[2]

The court applied the “transformative use test” and determined that the product was so transformed “that it has become primarily the defendant’s own expression.”[3] In contrast, where a game featured “exact depictions of [a band's] members doing exactly what they do as celebrities [i.e., singing and playing music],” the California Court of Appeal found no transformative use.[4] Given the parallels between the fact pattern in Hamilton and the scenarios likely to arise in deepfake cases, the transformative use test could plausibly be extended to deepfake technology. Although Hamilton did not involve deepfake manipulation, the court's emphasis on the transformation of the plaintiff's likeness and persona suggests a framework applicable to cases where individuals are digitally replicated through deepfake methods.

Common law is meant to evolve with the times. As technology continues to advance, it will become increasingly difficult to distinguish manipulated content from authentic content, and this difficulty intensifies the potential for harm. The harm inflicted by a well-crafted deepfake mirrors that caused by a genuine video, since viewers cannot tell real from fake. Recognizing this similarity in harm may facilitate the application of the existing legal doctrines explored above to deepfake-related cases.

As a final note, we may ask whether a regulatory response to deepfakes would be more effective. Federal legislation has been proposed, most recently the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act in October. While federal law could provide a standardized and more comprehensive approach, crafting legislation that is both effective and flexible enough to adapt to the rapidly evolving nature of deepfakes is a complex task. Some states have passed deepfake regulations with a more focused scope. For example, a New York law targets the unlawful dissemination of digitized intimate images, while California has implemented a law prohibiting the distribution of deceptive political media within a specific timeframe.

State laws, while more specific and targeted in addressing particular types of deepfake misuse, may fall short of encompassing the full range of potential harm. Furthermore, the pace of technological advancement often outruns the legislative process. In the end, a combination of specific regulation and a common law recognition of the harm caused by deepfakes will be most effective in addressing these issues.


[1] Hamilton v. Speight, 827 F. App’x 238, 4 (3d Cir. 2020).

[2] Id. at 5.

[3] Id. at 3.

[4] No Doubt v. Activision Publ’g, Inc., 122 Cal. Rptr. 3d 1034 (Cal. Ct. App. 2011).