Big Tech May Escape a Narrowing of Section 230 Liability Protections After All

Francesca Huth

Can YouTube be held legally liable for algorithmically recommending videos made by terrorist organizations like ISIS? The Supreme Court heard oral arguments earlier this week in Gonzalez v. Google, which endeavors to answer just that. The case was brought by the family of an American exchange student killed in a 2015 terrorist attack in Paris. Plaintiffs allege that YouTube’s “Up Next” recommendation feature promoted radicalizing terrorist messages that contributed to the Paris attack.[1]

The case has been heralded as one with potential to change the internet by redefining the scope of Section 230 of the Communications Decency Act of 1996. Section 230 has long been a source of controversy. The law shields platforms from liability for most content posted by third parties.[2] Big tech companies argue that the immunity they currently enjoy under Section 230 is necessary to protect their free speech rights and maintain their ability to supply useful content to users.[3] Critics argue that current protections go too far in allowing tech companies to propagate hate speech and also disincentivize them from investing in harm reduction.[4] Now the question is whether the Section 230 liability shield extends to content that sites promote, not just host.

Prior to this week’s oral arguments, media headlines expressed concern that the Supreme Court might disrupt the internet with its decision. After two and a half hours of argument, though, the justices appeared wary of the consequences of altering Section 230.[5]

Conservative and liberal justices alike questioned whether this decision might be better left to Congress, citing both the “difficulty of drawing lines in this area” and the potential economic harms that could flow from overhauling Section 230.[6] Justice Kagan remarked that the plaintiffs’ theory risked “creating a world of lawsuits,” a worry Justice Kavanaugh later echoed.[7] Eric Schnapper, attorney for the plaintiffs, countered that the implications would be minimal “because the kinds of circumstances in which recommendations would be actionable are limited.”[8] The plaintiffs argued that YouTube’s recommendations are not traditionally protected third-party speech but rather YouTube’s own speech, and therefore fall outside the liability shield. Justice Alito responded that he was “completely confused” by the argument.[9]

Several justices expressed their uncertainty about which kinds of algorithms would be legally actionable. Justice Thomas remarked that algorithms are used to promote both harmful content like ISIS videos and innocuous content like cooking videos, and Justice Kagan noted that “algorithms are endemic to the internet.”[10] Schnapper stated that while algorithms are ubiquitous, they are only actionable if they are employed in a manner that, for example, promotes content created by ISIS.[11]

The justices also pressed Lisa S. Blatt, who represented Google, on just how far Section 230 protection extends. Blatt maintained that even algorithms that are not neutral but instead designed to push a particular message would still be protected.[12] Taken to its extreme, a position Blatt seemed willing to endorse, this reading would mean that tech giants could not be held liable even for promoting the most violent, crude, and harmful videos. Allowing the richest companies in the world to escape liability for actively encouraging the most heinous behavior cannot possibly be what the drafters of Section 230 had in mind. While Google looks likely to win the case given the confusion surrounding the plaintiffs’ oral argument, there will likely be pushback, particularly from the Court’s liberal justices, against the broad reading of Section 230 proffered by Blatt.

The Court may well leave this issue to Congress. As Justice Kagan remarked, the justices are “not like the nine greatest experts on the internet.”[13] The Supreme Court’s decision is expected to be released by early summer.


[1] Gonzalez v. Google LLC, 2 F.4th 871, 884 (9th Cir. 2021), cert. granted (Oct. 3, 2022).

[2] Rachel Lerman, Section 230: The Little Law that Defined How the Internet Works, Wash. Post (Sept. 30, 2022), https://www.washingtonpost.com/technology/2020/05/28/what-is-section-230/ [https://perma.cc/3FKQ-FHWA] [https://web.archive.org/web/20230228000752/https://www.washingtonpost.com/technology/2020/05/28/what-is-section-230/].

[3] Id.

[4] Id.

[5] Brian Fung and Tierney Sneed, Takeaways from the Supreme Court’s Hearing in Blockbuster Internet Speech Case, CNN (Feb. 21, 2023), https://www.cnn.com/2023/02/21/tech/supreme-court-gonzalez-v-google/index.html [https://perma.cc/2KTP-LCWR] [https://web.archive.org/web/20230228001123/https://www.cnn.com/2023/02/21/tech/supreme-court-gonzalez-v-google/index.html].

[6] Lauren Feiner, Supreme Court Justices in Google Case Express Hesitation About Upending Section 230, CNBC (Feb. 22, 2023), https://www.cnbc.com/2023/02/21/supreme-court-justices-in-google-case-hesitate-to-upend-section-230.html [https://perma.cc/SL8C-VHPE] [https://web.archive.org/web/20230228001301/https://www.cnbc.com/2023/02/21/supreme-court-justices-in-google-case-hesitate-to-upend-section-230.html].

[7] Transcript of Oral Argument at 111, Gonzalez v. Google LLC, No. 21-1333 (U.S. Feb. 21, 2023).

[8] Id. at 57.

[9] Id. at 34.

[10] Id. at 9.

[11] Id. at 10.

[12] Id. at 127.

[13] Id. at 45.