It is axiomatic (or at least plausible) that there is insufficient common law or statutory scaffolding for a robust and accountable internet media landscape. But the federal regulatory stagnation is not surprising. The American public and our courts are still faced with the vexing question of what social media—and for that matter, the internet—even is and why it matters to the future of our democracy.  

The first idiosyncrasy of this issue is its political valence.  Neither conservatives nor liberals are clear-eyed about the virtues of social media regulation, because (aside from a poor understanding of the internet writ large) smart people can disagree about what the debate is even about.  In one telling, liberals want to trust-bust and conservatives want to protect free speech.  In another telling, conservatives want a platform to circumvent liberal media while liberals want to uphold “communications decency” (the namesake of Section 230’s enacting legislation).  And while the First Amendment is unquestionably at stake, perhaps this is more about the future of social technology than the future of free speech.

Second, there is the jurisprudential divide.  Some scholars believe that tech giants are trusts that should be broken up.  Columbia Law professor and FTC Chair Lina Khan has been a prominent proponent of this philosophy, most visibly in recent efforts in federal court to break up Facebook (now Meta).[1]  Others believe that the platforms are natural monopolies and should be regulated as public utilities or common carriers: social media may be a natural monopoly, but it can be regulated like an information rail carrier.  This theory draws on a long line of legal precedent, from nineteenth-century railway transport to, more aptly, internet service providers.  Tangentially related is Tim Wu’s “net neutrality” notion.[2]  In fact, the very statute that enraptures contemporary pundits on the topic—Section 230 of the Communications Decency Act—is codified under the heading “Common Carrier Regulation.”[3]  A key distinction must be made between common carrier theories and public utility theories: the latter applies to circumstances (commonly in energy law) in which a company contracts with a government to provide price-controlled services in exchange for a market monopoly.[4][5]  On the public utility analogy, social media is like an information grid.  With electricity grids, the federal government has jurisdiction to regulate interstate transmission and wholesale energy transactions, while states have jurisdiction over generation and retail transactions.[6]

The public utility perspective is the least plausible, because it would require governments to contract for social media services on behalf of their constituents (or risk a Fifth Amendment takings violation).[7]  But it offers useful analogies for analyzing the regulatory principles in play.  Just as in net metering of rooftop solar units, each social media user produces their own content.  In return, they are connected to a grid of other users.  But no user can dictate everything about where their content goes or how it travels.  We know from energy law that transparency of ratemaking and procedure is a key function of public regulators.  Transparency in how users receive their media is perhaps in the public interest, but it would not be a boon for social media firms: no media firm would be keen to publish its trade secrets about recommendation algorithms, nor have users demonstrably benefited from clicking through “cookies” warnings.  Further, media companies (like energy utilities) would expect something in return for transparency.  (Perhaps Section 230’s liability protections are just that gift, but more on that elsewhere.)  Instead of (or in addition to) transparency, a key principle could be certainty.  But how can regulation create certainty for internet media companies and those who deal with them?

One option could be to structure contracts such that media companies offer assurances to advertisers that their ads will not appear next to hate speech, known and widely disseminated misinformation, graphic violence, and the like.  Advertisers would gain certainty that their brands are insulated from appearing next to violent or false content.  The public, in turn, could hold advertisers—not just Meta—accountable if they chose to advertise to users posting and endorsing such content.  Such assurances would come at a premium and would likely need to be accompanied by incentives for media companies, which would be left responsible for investing billions in policing their platforms for the public’s benefit.  Social media companies have been pursuing such solutions already, but the legal norms and infrastructure are perhaps still developing.


[1] The New Yorker, https://www.newyorker.com/news/daily-comment/why-facebook-is-suddenly-afraid-of-the-ftc

[2] Wired, https://www.wired.com/story/no-facebook-google-not-public-utilities/

[3] 47 U.S.C. § 230, codified in Title 47, ch. 5, subch. II, pt. I (“Common Carrier Regulation”).

[4] Wired, https://www.wired.com/story/no-facebook-google-not-public-utilities/

[5] Pub. Utils. Comm’n v. Attleboro Steam & Elec. Co., 273 U.S. 83 (1927).

[6] Pub. Utils. Comm’n v. Attleboro Steam & Elec. Co., 273 U.S. 83 (1927).

[7] Wired, https://www.wired.com/story/no-facebook-google-not-public-utilities/