Greg Parker

When the House Intelligence Committee met in June 2019 to discuss the most pressing threats to the upcoming 2020 presidential election, Committee Chairman Representative Adam Schiff opened with an alarming call to action: “We are on the cusp of a technological revolution,” he warned, “that could enable even more sinister forms of deception and disinformation by malign actors, foreign or domestic.”[1]

Mr. Schiff was referring to deepfakes—a video editing technique that uses machine learning to manipulate the words, faces, and actions of recorded individuals, making them appear to say and do things they never actually did. Deepfake technology, which got an innocuous start in 1997 as a tool for movie audio dubbing,[2] has been used for everything from animating the Mona Lisa to turning comedian Bill Hader into Tom Cruise.[3] But those innocent videos belie a more nefarious reality. Ninety-six percent of deepfakes place non-consenting victims in sexually explicit videos.[4] Other deepfake videos, such as those mimicking Matteo Renzi, former prime minister of Italy, threaten the reputations of prominent political figures.[5] Politicians are frequent targets, and deepfakes are expected to be indistinguishable from real recordings by the time presidential primaries begin.[6] Congressional action may therefore be forthcoming.

The Committee's concern reflects the government's increasingly difficult task of addressing technologies that seem to emerge overnight and propagate online almost immediately. The number of deepfake videos online has nearly doubled in the past year alone: almost 15,000 can now be found online.[7] This surge is due in part to the ease with which deepfake videos can be made. To create a deepfake, one needs only to download an app (like the Chinese-produced Zao), upload a source video, and record a message to be superimposed.[8] This technology has enabled a new kind of political dissent. In the words of House Intelligence Committee member Representative Val Demings, “[t]he internet is the new weapon of choice” for those seeking to undermine the democratic process.[9] As with nuclear weapons, an arms race is emerging around the increasing quality and quantity of deepfakes—and the institutional tools best suited to restrain them.[10] But choosing the right tool for the job depends on whom you ask.

For Danielle Citron, a professor at Boston University Law School, the underlying problem is not the burgeoning technology behind deepfakes, but 47 U.S.C. § 230(c) of the Communications Decency Act of 1996.[11] That provision, entitled “Protection for ‘Good Samaritan’ Blocking and Filtering of Offensive Content,” was enacted to encourage providers to block and filter pornography by giving them immunity from resulting liability. Put differently, providers like Facebook cannot be held liable for blocking pornography. However, after the Supreme Court's 1997 ruling in Reno v. American Civil Liberties Union struck down several key provisions that had barred providers from transmitting certain pornographic content,[12] one piece of the Act remained: content providers would be treated differently from content creators. After Reno, providers like Facebook could not be held liable for the content users post. Instead of encouraging providers to block egregious content, the provision as constructed allows providers to escape liability for under-filtering or failing to monitor egregious content.

To this end, Ms. Citron has proposed an elegantly simple solution. She would introduce a tort-like negligence standard into the language of 47 U.S.C. § 230(c)(1). Her proposal would amend the provision to state: “No provider or user of an interactive computer service [that engages in reasonable content moderation practices] shall be treated as the publisher or speaker of any information provided by another information content provider.”[13] Legislative immunity from tort is a subsidy, and Ms. Citron’s revision balances the pro-industry, zero-liability framework of the provision as written against a potential strict liability framework. This proposed amendment may stave off further regulation by imposing a legal obligation to moderate websites, albeit at the cost of leaving the interpretation of “reasonable content moderation practices” to the courts.

However, incorporating a negligence clause into the existing legislation might be an insufficient response to the two-pronged conundrum that deepfakes pose. First, apps like Zao have become powerful weapons in the arsenal of political activists spreading online disinformation campaigns. Second, these apps offer a new way for online predators to artificially place victims in compromising situations. To address these nuances, both Congress and the California legislature have taken measures of their own.

Whereas Citron’s solution places liability on content providers, a bill introduced in Congress would place liability on content creators. The cleverly named Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act of 2019 (DEEPFAKES Accountability Act) would impose criminal liability on creators who intend to interfere in, among other things, an election and knowingly fail to include a watermark disclosing false content. The criminal liability introduced by the DEEPFAKES Accountability Act would also apply to creators of sexually explicit synthetic videos who intend to humiliate or harass their targets, while establishing a civil cause of action for individuals and corporations to sue alleged violators.[14]

In contrast, California Governor Gavin Newsom has signed into law two bills that punish creators and distributors alike. AB 730 makes it illegal to create or distribute “materially deceptive audio or visual media” with the intent to deceive voters within 60 days of an election, unless the media are disclosed as fake.[15] AB 602 provides a cause of action for victims to sue offending creators who knowingly distribute or disclose a synthetic video without consent.[16]

Of course, laws that shift liability onto providers have their detractors. The Electronic Frontier Foundation (EFF) warns that content provider immunity (an immunity not shared by European countries, Canada, or Japan) is what makes the U.S. internet so successful and a “safe haven for . . . controversial or political speech and a legal environment favorable to free expression.”[17] The organization further cautions that content providers are not equipped to prevent objectionable content from appearing on their sites, and that eroding the protections afforded under 47 U.S.C. § 230(c) might eliminate the services we rely on.[18] The ACLU echoed similar concerns when it urged Governor Newsom to veto AB 730, warning that the law would “result in voter confusion, malicious litigation, and repression of free speech.”[19]

Others believe the answer to combating deepfakes lies in self-regulation. In letters sent to 11 social media companies (including Facebook, Twitter, and YouTube), Senators Mark Warner and Marco Rubio urged those companies to self-regulate. Indeed, such industry giants seem to recognize that failing to address the issue head-on invites unwelcome federal and state regulation. Mark Riedl, associate professor of computer science at Georgia Tech, says that tools exist to detect deepfake videos, and “[y]ou can counter-program against [them]” to automatically block offending videos and report infringing accounts.[20] To that end, Google has released 3,000 deepfake videos in the spirit of fueling technologies that will allow social media companies to detect and remove offensive content.[21] In China, Tencent has blocked users from posting Zao links on its WeChat platform.[22] Only time will tell whether U.S. companies will follow suit.

Whatever the solution to deepfake videos, one can only hope that lawmakers and technology companies act quickly. Or perhaps the answer is as simple as heeding deepfake-victim Tom Cruise’s advice: “People need to learn to be more critical.”[23]


-------------------------------------------------------------------

[1] National Security Challenges of Artificial Intelligence, Manipulated Media, and “Deepfakes” Hearing: U.S. House of Representatives Permanent Select Comm. on Intelligence, 116th Cong. 1 (2019) (Statement of Hon. Adam Schiff, Chairman Rep.), available at: https://docs.house.gov/meetings/IG/IG00/20190613/109620/HHRG-116-IG00-Transcript-20190613.pdf.

[2] Christoph Bregler et al., Video Rewrite: Driving Visual Speech with Audio 2 (1997).

[3] Jon Blistein, Watch Bill Hader Become Tom Cruise, Seth Rogen in Eerie Deepfake Video, Rolling Stone (Aug. 13, 2019), https://www.rollingstone.com/culture/culture-news/bill-hader-tom-cruise-seth-rogen-deepfake-871154/.

[4] Giorgio Patrini, Mapping the Deepfake Landscape, Deeptrace (Jul. 10, 2019), https://deeptracelabs.com/mapping-the-deepfake-landscape.

[5] Deepfake Video of Former Italian PM Matteo Renzi Sparks Debate in Italy, Yahoo News (Oct. 8, 2019), https://uk.news.yahoo.com/deepfake-video-former-italian-pm-102505525.html.

[6] Kevin Stankiewicz, ‘Perfectly Real’ Deepfakes Will Arrive in 6 Months to a Year, Technology Pioneer Hao Li Says, CNBC (Sep. 20, 2019), https://www.cnbc.com/2019/09/20/hao-li-perfectly-real-deepfakes-will-arrive-in-6-months-to-a-year.html.

[7] Patrini, supra note 4.

[8] Zak Doffman, Chinese Deepfake App ZAO Goes Viral, Privacy Of Millions ‘At Risk’, Forbes (Sep. 2, 2019), https://www.forbes.com/sites/zakdoffman/2019/09/02/chinese-best-ever-deepfake-app-zao-sparks-huge-faceapp-like-privacy-storm/#32e9b1598470.

[9] National Security Challenges of Artificial Intelligence, Manipulated Media, and “Deepfakes” Hearing: U.S. House of Representatives Permanent Select Comm. on Intelligence, 116th Cong. 62 (2019) (Statement of Val Demings, Rep.), available at: https://docs.house.gov/meetings/IG/IG00/20190613/109620/HHRG-116-IG00-Transcript-20190613.pdf.

[10] David Axe, Inside the Deepfake ‘Arms Race’, The Daily Beast (Oct. 10, 2019), https://www.thedailybeast.com/inside-the-deepfake-arms-race.

[11] 47 U.S.C. § 230, available at: https://www.law.cornell.edu/uscode/text/47/230.

[12] Reno v. A.C.L.U., 521 U.S. 844 (1997).

[13] National Security Challenges of Artificial Intelligence, Manipulated Media, and “Deepfakes” Hearing: U.S. House of Representatives Permanent Select Comm. on Intelligence, 116th Cong. 30 (2019) (bracketed language is the proposed addition to 47 U.S.C. § 230(c) by Danielle Citron, Prof.), available at: https://docs.house.gov/meetings/IG/IG00/20190613/109620/HHRG-116-IG00-Transcript-20190613.pdf.

[14] H.R. 3230, 116th Cong. (2019), Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act of 2019, available at: https://www.congress.gov/bill/116th-congress/house-bill/3230/text.

[15] AB-730 Elections: Deceptive Audio or Visual, available at: https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB730.

[16] AB-602 Depiction of Individual Using Digital or Electronic Technology: Sexually Explicit Material: Cause of Action, available at: https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB602.

[17] Section 230 of the Communications Decency Act, Electronic Frontier Foundation, https://www.eff.org/issues/cda230.

[18] Id.

[19] California Bans 'Deep Fakes' Video, Audio Close to Elections, Associated Press (Oct. 4, 2019), https://a24.asmdc.org/news/20191004-california-bans-deep-fakes-video-audio-close-elections-associated-press.

[20] Axe, supra note 10.

[21] Doffman, supra note 8.

[22] Catherine Shu, WeChat Restricts Controversial Video Face-Swapping App Zao, Citing ‘Security Risks’, Techcrunch (Sep. 3, 2019), https://techcrunch.com/2019/09/02/wechat-restricts-controversial-video-face-swapping-app-zao-citing-security-risks/.

[23] Blistein, supra note 3.