Earlier this month, an Australian mayor threatened to sue OpenAI, the company behind ChatGPT, for defamation over the chatbot’s claim that he had been found guilty in a corporate bribery scandal and served prison time. In reality, the mayor was a whistleblower in that case and has since made a career of combatting this type of corruption.[1] This raises the question: to what extent can an AI chatbot be held liable for defamation?

In the United States, proving a defamation claim can be notoriously difficult. In New York Times Co. v. Sullivan (1964), the Supreme Court held that the First Amendment requires a public official (a rule later extended to public figures) alleging defamation to prove “actual malice” by “clear and convincing evidence.” Under this standard, the plaintiff must show that the false statement was made “with knowledge that it was false or with reckless disregard of whether it was false or not.”[2] In Australia, the bar for winning a defamation case is significantly lower, as the publisher’s intention is irrelevant.[3]

In Australian courts, the answer to whether ChatGPT’s output can give rise to defamation liability appears to be a fairly clear “yes.” Under the American standard, the question is more complicated. A chatbot is not a human being; it cannot be reckless or negligent, and it cannot do anything “intentionally.” The developers of chatbot platforms can, however, and they are the ones who could potentially be held liable for defamation when a chatbot gets a little too creative with the facts.[4]

The law governing websites that host third-party content may be a useful analog in exploring how defamation law applies to ChatGPT. Wikipedia, for example, builds and maintains its corpus by relying on unaffiliated editors to contribute information and correct inaccuracies. For websites like these, Section 230 of the Communications Decency Act shields companies from being treated as the publisher or speaker of defamatory content posted by their users.[5] In the past, victims of libel still had recourse: they could go after the individual who made the statement. With AI-generated content, however, there is no human author to pursue.

Whether this liability shield extends to developers of AI software is unclear. However, legislators who helped draft Section 230 have recently said they believe AI services are not covered by the statute.[6] In oral arguments earlier this year, Justice Gorsuch suggested a similar view.[7] If the statute does not protect the hosts of AI platforms, the litigation floodgates may open.

Much of the discourse surrounding the Section 230 liability shield applies directly to the question at hand. Opponents of the liability shield say that platforms such as Facebook and Wikipedia should be responsible for monitoring their own users and bear the costs of developing tools to fight misinformation, whereas supporters of the shield believe that the value of free expression and the benefit of these platforms outweigh the costs of occasional misinformation. With AI, these questions become even more salient.


[1] https://www.reuters.com/technology/australian-mayor-readies-worlds-first-defamation-lawsuit-over-chatgpt-content-2023-04-05/

[2] https://www.law.cornell.edu/wex/defamation#:~:text=To%20prove%20prima%20facie%20defamation,entity%20who%20is%20the%20subject

[3] https://fls.org.au/law-handbook/rights-activism-and-fair-treatment-at-work/defamation-and-your-rights/what-is-defamation/

[4] https://www.politico.com/newsletters/digital-future-daily/2023/04/06/so-youve-been-defamed-by-a-chatbot-00090874

[5] https://www.law.cornell.edu/uscode/text/47/230; https://www.pbs.org/newshour/politics/what-you-should-know-about-section-230-the-rule-that-shaped-todays-internet#:~:text=That's%20thanks%20to%20Section%20230,by%20another%20information%20content%20provider.%E2%80%9D

[6] https://hai.stanford.edu/news/law-policy-ai-update-does-section-230-cover-generative-ai; https://www.wsj.com/articles/chatgpt-libeled-me-can-i-sue-defamation-law-artificial-intelligence-cartoonist-court-lawyers-technology-14086034

[7] Id.