In July 2024, a deepfake video of then-Vice President and Democratic presidential candidate Kamala Harris describing herself as the "ultimate diversity hire" spread rapidly across social media[1]. While this particular hoax was quickly debunked, it raised a troubling question: What if the deceptions weren't so obvious? Imagine instead a deepfake video showing her engaged in corruption: by the time the truth emerged, the damage would already be done. Can democracy withstand the onslaught of deepfakes? And how can we regulate them without undermining free speech?
The Dangers of Deepfakes: Blurring the Line Between Truth and Fake
Deepfakes are AI-generated videos, images, or audio clips that manipulate real footage to create highly realistic but entirely false depictions of people. While misinformation in election campaigns is nothing new, deepfakes amplify its impact in unprecedented ways. Deepfakes have the potential to completely erase the line between truth and fabrication, making fake narratives appear hyper-real and far more difficult to detect and debunk than traditional photoshopped images or misleading quotes.
Beyond creating falsehoods, deepfakes undermine real evidence, leading to what’s known as the "liar’s dividend[2]"—allowing liars to dismiss truth as fake. For example, Elon Musk’s legal team recently suggested in court that past statements he made about Tesla’s self-driving capabilities could have been deepfakes[3]. If individuals can simply dismiss incriminating audio or video as an AI-generated hoax, it will become increasingly difficult to hold public figures accountable.
The greatest danger isn’t just the creation of fake content—it’s a world where no content can be trusted. As deepfakes erode our collective ability to believe what we see and hear, the very foundation of democracy—an informed electorate—will be at risk.
Legal Remedies and Challenges
While traditional legal remedies like defamation and false light privacy torts can address some of the harms caused by deepfakes, they are inadequate to deal with the full scope of the problem. The origin of a deepfake is often unknown, making it difficult to track down the creator and hold them legally responsible. Moreover, these torts aim to protect individuals from reputational harm, but deepfakes present a far broader threat: the damage extends beyond the individual to the very fabric of public trust in elections and democracy itself. The burden of holding creators accountable should not fall solely on the targeted individual. We need new regulations that address deepfakes directly.
Legislative efforts at both the federal and state levels to regulate political deepfakes have taken two main approaches: requiring disclosure of deepfake content or banning it altogether.
Disclosure laws, which mandate that AI-generated content be clearly labeled, are generally more likely to withstand legal scrutiny. They align with campaign transparency rules and do not outright prohibit speech, making them more constitutionally defensible. The DEEPFAKES Accountability Act (H.R. 5586[4]) and state laws in Washington[5] and Michigan[6] require disclaimers on AI-generated political ads. However, disclosure laws can still be ruled unconstitutional if their requirements are too stringent. A federal judge blocked California’s AB 2839[7], which required a disclaimer to appear throughout an entire video in the largest font size used. The court ruled this was compelled speech that interfered with parody and satire. Nevertheless, the court noted that a more narrowly tailored disclosure requirement might be constitutional.
Lawmakers have also proposed banning certain categories of deepfakes. Texas[8] and Minnesota[9] have passed laws prohibiting highly realistic deepfake videos designed to mislead the public within specific pre-election windows. Such bans may be more effective than disclosure laws, since viewers can overlook a disclaimer and bad actors can easily strip disclosure marks before resharing. For content with little or no redeeming value, such as videos intended to suppress voter turnout or to falsely depict illegal acts in order to delegitimize an election, an outright ban could be justified.
However, such efforts face major First Amendment hurdles. Courts have ruled that broad bans on manipulated political content risk suppressing protected speech, satire, and parody. For instance, California’s AB 2839, which sought to prohibit AI-generated election misinformation, was blocked by a federal judge due to concerns over overbreadth and free speech limitations. The judge ruled that banning “materially deceptive” AI-generated content “reasonably likely to harm the reputation or electoral prospects of a candidate” during elections was too broad and could chill legitimate political satire, parody, and even news reporting.
Banning deepfakes without suppressing satire remains extremely challenging. While there is a clear need to regulate deceptive AI-generated content, the line between misinformation and constitutionally protected speech remains legally and politically fraught.
Can Democracy Survive Deepfakes?
The Kamala Harris deepfake controversy was just a preview of what’s to come. AI-generated disinformation is becoming increasingly sophisticated, and current laws are struggling to keep up.
While free speech protections must be preserved, inaction is not an option. Given the legal and constitutional challenges surrounding deepfake regulation, lawmakers should focus on narrowly tailored measures that balance election integrity with First Amendment protections.
Ultimately, regulation alone won't solve the deepfake crisis. Just as critical, if not more so, is ensuring that the public remains media-literate. In an era where outsourcing critical thinking to AI is dangerously easy, we must guard against losing our ability to discern truth from fiction. We recognized immediately that the Kamala Harris video was fake, but that may not be the case in a future where we have lost our capacity to think critically.
[1] https://www.wsj.com/us-news/law/election-deepfakes-prompt-state-crackdownsand-first-amendment-concerns-0b992e8e?
[2] https://www.californialawreview.org/print/deep-fakes-a-looming-challenge-for-privacy-democracy-and-national-security
[3] https://www.reuters.com/legal/elon-or-deepfake-musk-must-face-questions-autopilot-statements-2023-04-26
[4] https://www.congress.gov/bill/118th-congress/house-bill/5586/text
[5] https://legiscan.com/WA/bill/SB5152/2023
[6] https://elias.law/newsroom/client-alerts/michigan-introduces-disclaimer-requirements-on-political-ads-using-ai
[7] https://docs.justia.com/cases/federal/district-courts/california/caedce/2%3A2024cv02527/453046/14
[8] https://capitol.texas.gov/tlodocs/86R/billtext/html/SB00751S.htm
[9] https://casetext.com/statute/minnesota-statutes/crimes-expungement-victims/chapter-609-criminal-code/crimes-against-reputation/section-609771-use-of-deep-fake-technology-to-influence-election