Imagine: you click on a news clip and see the President give a controversial speech at a conference. The atmosphere and dialogue seem real, and you decide to share the recording with all of your friends. Soon, they do the same. Within hours, everyone has seen the clip, but only later do you learn that the President’s face was superimposed onto another body, the audio was fabricated, and the conference never took place. This is the reality of deepfake media manipulation.
Deepfakes use an artificial intelligence method called deep learning to make images and recordings of fake events, sometimes entirely from scratch. This process typically happens in two steps. Step one is to use a neural network to extract a face from a source image and encode it into a set of features and possibly a mask. Step two is to use another neural network to decode the features, upscale the generated face, rotate and scale it as needed, and apply it to a target image. More advanced AI techniques can vastly improve the resulting product; generative adversarial networks, for example, do so by pitting two neural networks against each other. By dissecting an image pixel by pixel, or a short audio clip of a voice, and applying AI predictive technology, the software can create a doppelgänger doing something entirely different from the original image or sound clip, limited only by the imagination of the creator.
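The two-step encode/decode pipeline described above can be sketched in miniature. The snippet below is a toy illustration only: the weight matrices are random stand-ins for networks that would normally be trained on thousands of face images, and the sizes (a 64×64 grayscale crop, a 32-dimensional feature vector) are arbitrary assumptions, not values from any real deepfake tool.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT = 32          # size of the shared feature vector (assumed)
FACE = 64 * 64       # a flattened 64x64 grayscale face crop (assumed)

# A shared encoder compresses any face into a latent feature vector;
# an identity-specific decoder renders a face from that vector.  In a
# real system both would be trained; here the weights are random.
W_enc = rng.normal(scale=0.01, size=(FACE, LATENT))
W_dec_target = rng.normal(scale=0.01, size=(LATENT, FACE))

def encode(face):
    """Step one: extract features (expression, pose) from a source face."""
    return np.tanh(face @ W_enc)

def decode_as_target(features):
    """Step two: render the target identity using the source's features."""
    return np.tanh(features @ W_dec_target)

source_face = rng.random(FACE)           # stand-in for a real face crop
swapped = decode_as_target(encode(source_face))
print(swapped.shape)                     # (4096,) -- a full fake face image
```

The key design point this illustrates is the asymmetry that makes face swapping possible: one encoder learns identity-independent features (expression, lighting, head pose), while each decoder learns to paint a single identity, so feeding person A's features into person B's decoder yields B's face performing A's expression.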
Recognizing the vast destructive potential of deepfakes, major private sector companies and research institutions have organized efforts to curb deepfake misuse and weaponization. Notably, Microsoft has announced its Microsoft Video Authenticator, software that analyzes photos and videos and reports a percentage chance that the content has been artificially manipulated. A research team at Columbia University has, for its part, created an algorithm that identifies pieces of images repurposed from other images; similar concepts could potentially be applied to video data. Perhaps the most daring technological solution is an algorithm that detects and calculates a pulse for the subjects of a video by measuring the frequency of subtle color changes in tissue. The algorithm would then flag altered videos, because an AI-generated video of a human would not exhibit such subtle variations. However, pessimists doubt whether defense (detection) can ever catch up with offense (creation).
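The pulse-detection idea can be illustrated with a simple signal-processing sketch. This is not the published algorithm (which works on tracked skin regions of real footage); it is a minimal, assumed-parameters demonstration of the underlying principle: average the green channel of each frame, find the dominant frequency with a Fourier transform, and check whether it falls in a plausible heart-rate band (here assumed to be 0.7–4.0 Hz, i.e., 42–240 beats per minute, at an assumed 30 frames per second).

```python
import numpy as np

FPS = 30.0  # assumed video frame rate

def dominant_frequency_hz(green_means, fps=FPS):
    """Return the strongest frequency in the per-frame mean green signal."""
    signal = green_means - np.mean(green_means)      # drop the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

def looks_alive(green_means, fps=FPS, lo=0.7, hi=4.0):
    """Flag footage as 'live' if a pulse-like periodicity is present."""
    return lo <= dominant_frequency_hz(green_means, fps) <= hi

# Simulated 10-second clips: a real face carries a faint ~1.2 Hz (72 bpm)
# color oscillation from blood flow; a synthetic face shows only slow
# drift and noise, with no heartbeat-band periodicity.
t = np.arange(0, 10, 1.0 / FPS)
rng = np.random.default_rng(1)
real_face = 120 + 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.normal(size=t.size)
fake_face = 120 + 0.1 * t + 0.05 * rng.normal(size=t.size)

print(looks_alive(real_face))   # True  -- pulse-band periodicity found
print(looks_alive(fake_face))   # False -- no plausible heartbeat signal
```

In practice the real signal is orders of magnitude weaker and buried in motion and lighting noise, which is why research-grade systems track tissue regions across frames and filter aggressively before any frequency analysis.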
From a legal standpoint, given the lack of legislative attention, especially at the federal level, it might be best to focus on rectifying harms to the victims of deepfakes. State tort law appears to be the most intuitive pathway to remedy. Since common law tort actions are filed under state law, victims must consider the nuances of common law doctrines. First, although such causes of action are similar across the fifty states and other U.S. territories, they are not identical. Second, in most jurisdictions, victims who are private citizens are afforded greater protection, while privacy tort actions brought by public officials and public figures call for an “actual malice” standard. Actual malice, per the Supreme Court’s decision in New York Times Co. v. Sullivan, is knowledge that a statement was false, or reckless disregard of whether it was false or not. Notably, deepfakes arguably satisfy the actual malice standard by their very nature, since the creator necessarily knows that the depicted events are fabricated.
Depending on case specifics, a cause of action for defamation may be the ideal avenue for a deepfake victim. Defamation occurs when a communication tends to harm the reputation of another so as to lower them in the estimation of the community or to deter third persons from associating or dealing with them. Because video is a fixed, permanent medium, as opposed to spoken descriptions of it, defamation actions arising from deepfakes would likely sound in libel rather than slander. In a libel cause of action in New York, the plaintiff must typically plead: (1) a written false and defamatory statement of fact concerning the plaintiff; (2) that was published by the defendant to a third party; (3) due to the defendant’s negligence or actual malice, depending on the status of the person libeled; and (4) special damages or per se actionability. Of these four elements, the first and fourth are likely to elicit deeper consideration by courts. Deepfakes obviously lacking in disgraceful content and made with humorous intent may be easily disposed of. However, examples likely to be interpreted as fact and to damage the victim’s reputation, such as deepfake pornography, are likely to clear the first hurdle. In many instances, a deepfake creates a substantial risk of financial harm to its victim because of the inherent value of one’s reputation. This problem is exacerbated by the fast pace at which news, particularly harmful news, spreads online, making it likely that a genuinely damaging deepfake would satisfy the fourth element of libel. Therefore, defamation may be the optimal first avenue for a deepfake victim.
A deepfake victim may also consider pursuing a privacy action, in which case a false light publicity claim appears to be the most promising path to redress. In such a claim, the plaintiff must show (1) that the false light in which they were placed would be highly offensive to a reasonable person, and (2) that the actor had knowledge of, or acted in reckless disregard as to, the falsity of the publicized matter and the false light in which the other would be placed. Deepfakes necessarily place their subject before the public in a false light. Furthermore, in situations involving damaging deepfakes, the computer-generated video could reasonably be said to ascribe to the subject conduct in which they did not participate and to whose dissemination they likely did not consent.
Despite the potential for litigation success under defamation, privacy, or other tort claims, victims of deepfakes may find it challenging to recover any significant financial compensation for the harm suffered. Deepfake creators, often private individuals, may simply lack the funds to compensate the victim. The victim may instead choose to go after the publisher, likely a wealthier website that hosted the deepfake. In such situations, however, publishers will likely be shielded from liability by the broad protections of § 230 of the Communications Decency Act of 1996 (CDA). This provision protects providers of interactive computer services (including hosting websites) against suits based on torts committed by users of those platforms.
Deepfakes are a new yet very real problem. The technology used to create them is readily available to broad classes of Internet users. Furthermore, although efforts to combat the weaponization of deepfakes are underway, they do not appear to be keeping pace with the rapid development of the underlying technology. Legal solutions for victims of this problem seem uncertain at best, especially in the face of the broad First Amendment defenses likely to be raised by deepfakers. Moreover, the costs of litigation and CDA § 230 immunity for hosting services are likely to dissuade victims from seeking legal remedies. This reality should urge vigilance among all Internet users, especially public figures, and compel the legislature to revisit legal frameworks that were written for an Internet that has changed greatly since 1996.
 Amir-Khalili A. et al. (2014) Auto Localization and Segmentation of Occluded Vessels in Robot-Assisted Partial Nephrectomy. In: Golland P., Hata N., Barillot C., Hornegger J., Howe R. (eds) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2014. MICCAI 2014. Lecture Notes in Computer Science, vol 8673. Springer, Cham. https://doi.org/10.1007/978-3-319-10404-1_51.
N.Y. Times Co. v. Sullivan, 376 U.S. 254, 84 S. Ct. 710 (1964).
 Gleason v. Smolinski, 319 Conn. 394, 125 A.3d 920 (2015).
 Pring v. Penthouse Int'l, Ltd., 695 F.2d 438 (10th Cir. 1982).
 David A. Anderson, Reputation, Compensation and Proof, 25 WM. & MARY L. REV. 747, 766 (1984).
Restatement (Second) of Torts § 652E.
Russell Spivak, “Deepfakes”: The Newest Way to Commit One of the Oldest Crimes, 3 Geo. L. Tech. Rev. 339 (2019).
 47 U.S.C.S. § 230 (LexisNexis, Lexis Advance through Public Law 116-193, approved October 30, 2020).