§230 is often referred to as the “twenty-six words that created the internet.” The statute, passed by Congress in 1996, has immunized websites and platforms from liability for decades. Specifically, §230 has been held to shield websites and platforms from legal liability for almost everything posted on their sites. Arguably, this legal immunity has been essential to the development of the modern internet. Without it, companies like Facebook and Twitter would have faced potential liability for each of their users’ posts, forcing them either to engage in onerous vetting of posts or to abandon their business models altogether. This interpretation of §230 has prevailed since its passage, even as online communication has grown more complicated. But on October 3rd, the Supreme Court granted cert in an important case: Gonzalez v. Google. The case is likely to be path-breaking, both because it marks the first time the Supreme Court will weigh in on the interpretation of §230 and because the Court’s intervention may upend decades of precedent.

47 U.S.C. §230, part of the Communications Decency Act, was passed to ensure that websites (called “interactive computer services” in the statute) did not face perverse incentives against moderating content on their sites. The statute immunizes websites from being treated as publishers or speakers of content on their sites. Publishers face legal liability for the statements made by the people whose content they publish. Without this immunity, websites like Facebook could be legally liable for any statement made by their billions of users. Courts have interpreted this immunity broadly. They have found that a wide range of activities constitute publication: from allowing users to post statuses online, to making friend recommendations, to devising algorithms that boost certain content over other content. This broad interpretation of publication has made it nearly impossible to bring a social media site into court.

Critics have bemoaned the outdated nature of this kind of protection: back when websites often acted as passive “bulletin boards” for posters, it may have made sense to think of their activity as “publication,” as opposed to production of content. But today, websites actively participate in the content-making process. These sites, particularly social media websites, construct the environment in which speech occurs: they amplify content through algorithms that push certain posts to the top of users’ feeds, they connect users to one another, they allow users to post pseudonymously, and so on. Each of these decisions shapes what content is produced on the site. Courts’ broad interpretation of §230 immunity has effectively prevented websites with enormous control over public discourse from being held accountable for their content-based decisions.

Two avenues exist for the resolution of this problem: the courthouse and the statehouse. In Congress, politicians on both sides of the aisle have suggested limiting §230 immunity through statutory revision. In the federal courts, there has been some suggestion that judges are looking to narrow the existing interpretation of §230. In his partial dissent in Force v. Facebook, Chief Judge Robert Katzmann argued that it “strains the English language” to hold that Facebook is acting as a “publisher” of user content when it recommends friends. Justice Thomas took an arguably more forceful stance in a statement respecting the denial of certiorari in Malwarebytes v. Enigma Software Group, in which he argued that the prevailing broad interpretation of §230 immunity reads “extra immunity into [a statute] where it does not belong.” He suggested that “in an appropriate case,” the Court should “consider whether the text of this increasingly important statute aligns with the current state of immunity enjoyed by internet platforms.”

This term, the Court is set to take up Justice Thomas’s suggestion. In early October, the Court granted certiorari in Gonzalez v. Google. The case threatens to imperil the traditional interpretation of §230. The family of Nohemi Gonzalez, a college student killed in the 2015 ISIS attacks in Paris, alleges that Google’s subsidiary YouTube is partially responsible for her death. The family argues that YouTube’s recommendation algorithm contributed to recruitment by ISIS, and thus indirectly contributed to their daughter’s death. In the lower courts, Google successfully argued that §230 immunized it from liability for its algorithmic content recommendations. The Supreme Court will hear oral argument on whether that interpretation is correct — that is, whether Google can be held liable for the design of its algorithms.

A finding against Google would radically change the state of internet law. Currently, platforms face almost no legal liability for their design decisions, even as their use of algorithms is pervasive and ever-increasing. An interpretation of §230 that does not immunize platforms from liability for their use of algorithms would likely subject them to a flood of litigation over website design. And without the privilege of immunity, platforms would likely have to invest considerably more resources in defending themselves in court, which might incentivize them to limit their involvement in the curation of content on their websites.

Experts have expressed mixed views on the potential narrowing of §230. The statute has been lauded as necessary to the unimpeded development of online platforms, and as essential to building the internet that now makes up such an enormous part of our lives. Without its broad protection, advocates argue, platforms would face a crippling amount of legal liability. Furthermore, algorithms and curated newsfeeds are essential to connecting users with the information and people they’re interested in.

On the other side, critics point out that §230 has encouraged platforms to place profit motive ahead of creating healthy speech environments. Algorithms are built to keep users engaged, and they promote incendiary or false speech when it serves that end. Under this view, interpreting §230 as not covering algorithms is both more consistent with the text of the statute (because algorithms are developed by big tech companies rather than users) and an important step in ensuring that these companies face some accountability for their design decisions.

In the 26 years since its passage, the Supreme Court has never heard a case about the scope of §230. It’s clear from Justice Thomas’s statements on the matter that he is likely to vote to narrow that immunity. However, it is as yet unclear whether the other justices share his interest. Whichever way the case goes, it is certain to be a landmark in the field of internet law.