Earlier this month, an Australian mayor threatened to sue OpenAI, the developer of ChatGPT, for defamation over the chatbot's claim that he had been found guilty in a corporate bribery scandal and served prison time. In fact, the mayor was a whistleblower in that case and has since made a career of combatting this type of corruption. This raises the question: to what extent can an AI chatbot be liable for defamation?
In the United States, proving a defamation claim can be notoriously difficult. In New York Times Co. v. Sullivan (U.S. 1964), the Supreme Court held that the First Amendment requires a public official alleging defamation to bear the burden of demonstrating "actual malice" by "clear and convincing evidence." Under this standard, the false statement must have been made "with knowledge that it was false or with reckless disregard of whether it was false or not." In Australia, the burden for winning a defamation case is significantly less onerous, as the publisher's intention is irrelevant.
For Australian courts, then, the answer to whether ChatGPT's output can give rise to defamation liability seems to be a fairly clear "yes." Under the American standard, the question is more complicated. A chatbot is not a human being and cannot act recklessly, negligently, or "intentionally." The developers of chatbot platforms can, however, and they are the ones who could potentially be held liable for defamation when the chatbot gets a little too creative with the facts.
The law governing websites that host third-party-generated information may be a useful analog in exploring the application of defamation law to ChatGPT. Wikipedia, for example, builds and maintains its corpus by relying on unaffiliated editors to contribute information and correct inaccuracies. For websites like these, Section 230 of the Communications Decency Act shields companies from treatment as a publisher or speaker when a user posts defamatory speech on their platform. Under that regime, victims of libel still had recourse: they could go after the individual who made the comment. With AI-generated statements, however, there is no such individual to pursue.
Whether this liability shield extends to developers of AI software is unclear. However, legislators who helped draft Section 230 have recently expressed their belief that AI services are not covered by the statute, and in oral arguments earlier this year, Justice Gorsuch suggested a similar view. If the statute does not protect the hosts of AI platforms, the litigation floodgates may open.
Much of the discourse surrounding the Section 230 liability shield applies directly to the question at hand. Opponents of the liability shield say that platforms such as Facebook and Wikipedia should be responsible for monitoring their own users and bear the costs of developing tools to fight misinformation, whereas supporters of the shield believe that the value of free expression and the benefit of these platforms outweigh the costs of occasional misinformation. With AI, these questions become even more salient.
Sources:
https://www.law.cornell.edu/uscode/text/47/230
https://www.pbs.org/newshour/politics/what-you-should-know-about-section-230-the-rule-that-shaped-todays-internet