What do presidential candidates and adult film stars have in common?

The answer is not some crude political joke you might hear at a bar, but it may be equally disturbing. Deepfakes, deceptively real audiovisual products created using deep learning algorithms, are a rising concern in the United States. Two of the most alarming uses of deepfakes involve political deceit and non-consensual pornographic content.

There has been a dramatic increase in the number of deepfake videos on the internet over the past year, with one cybersecurity company reporting an 84% increase between December 2018 and July 2019 alone. By far the most common use of deepfakes, accounting for 96% of all uses observed, was the creation of sexually explicit content involving prominent female celebrities.

However, as technology continues to evolve, some commentators fear that deepfakes could spell trouble in another domain: the 2020 election. Quoted in a recent article, adjunct NYU law professor Paul Barrett describes how deepfakes could impact the election both directly, by tricking voters into erroneously associating false statements with candidates, and indirectly, by fomenting “apathy, low voter turnout, and disillusionment with the entire political system.”

So, how can we remediate the existing problem of pornographic identity appropriation and pave the way for an accountable and transparent election season? Technological solutions, such as using AI to identify altered videos, show promise. However, some argue that detection catches deepfakes too late, suggesting that new legislative initiatives are necessary to preempt these harms.

The remainder of this post will introduce existing and proposed legislative efforts, at both the state and federal levels, and discuss some of the concerns associated with legislating deepfakes.

Legislative Overview

At the state level, there have been a number of recent legislative efforts aimed at addressing the rising tide of deepfake videos. For example, California recently passed a pair of bills tackling sexually explicit content (AB-602) and political deception preceding an election (AB-730). Importantly, AB-602 creates a private right of action against creators and knowing disseminators of nonconsensual, sexually explicit altered material.

Another notable piece of legislation in this space is Texas’ SB-751, which explicitly criminalizes the creation and publication of “deep fake video[s]” within 30 days of an election. A unique aspect of the Texas bill is its explicit reference to deep fakes, addressing videos “created by artificial intelligence” in particular. In contrast, the California legislation uses language applicable to a broad range of alteration techniques.

In addition to these state initiatives, there are several relevant federal proposals to consider. Bills currently pending in both the Senate (S.3805 – Malicious Deep Fake Prohibition Act of 2018) and the House (H.R.3230 – DEEPFAKES Accountability Act of 2019) seek to target deepfakes. The more recent DEEPFAKES Accountability Act would establish a visual or audio disclosure requirement for deepfakes, setting out both criminal and civil liability as well as establishing a private right of action for enforcement. Currently, both bills have been referred to their respective committees in the Senate and the House.

Additionally, the Senate recently passed legislation (S.2065 – Deepfake Report Act of 2019) requiring “the Secretary of Homeland Security to publish an annual report on the use of deepfake technology.”

One final relevant piece of federal legislation to consider is Section 230 of the Communications Decency Act (47 U.S.C. § 230). Section 230 prevents computer service providers, such as Facebook and YouTube, from being held liable as “publishers or speakers” of information generated by others on their platforms, subject to some exceptions. For this reason, the Electronic Frontier Foundation (EFF) has characterized Section 230 as “the most important law protecting internet speech.” Though the focus of Section 230 is much broader than deepfakes alone, it still provides an important protection for sites where faked material might be posted.

Concerns

As new legislative initiatives begin to take form and effect, there are a number of issues to consider. To start, the EFF has raised First Amendment concerns surrounding recent legislation, attacking both California’s AB-730 and the DEEPFAKES Accountability Act. The EFF argues that the broad scope and vague terms of these bills could have a chilling effect on free speech, and may not be necessary in light of existing laws dealing with harassment, defamation, and the like.

Additionally, some have challenged the efficacy of the DEEPFAKES Accountability Act and other new initiatives. Discussing the Act’s disclosure system, one commentator pointed out that many bad actors will not comply with disclosure requirements, or will edit disclosures out of others’ work, especially when shielded by anonymous channels or bots.

Against recent suggestions to narrow Section 230’s coverage, thereby making platforms more accountable for policing deepfakes, the EFF has raised concerns that such action would impact not only large social media platforms, but also “small companies without the resources to defend against expensive lawsuits based on speech of their users.” According to the EFF, heightened platform liability “will also implicate a range of other forms of lawful and socially beneficial speech, as platforms censor more and more user speech to avoid any risk of legal liability.”

While new legislative efforts may be a necessary step toward preventing the harms that deepfakes create, it is important that officials do not pursue these measures blindly in a rush to satisfy their constituents. Misguided legislation in this space could not only fail to protect victims but also chill free speech and limit beneficial uses of the internet.