Artificial intelligence (AI) has returned to the forefront of public discourse, sparked by the AI Safety Summit recently held in London and the Biden Administration’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. These developments mark an initial wave of government responses to the increasing ubiquity of AI across economic sectors and, more broadly, society. Nevertheless, it is important to contextualize AI as simply one of many technological innovations that have captivated global interest over the last two decades. While many are eager to reap the potential benefits of AI, it remains uncertain whether governments will regulate AI more effectively than they have past technological advancements. Historically, legislatures worldwide have struggled to regulate digital innovation proactively and to predict its associated harms.

The main challenges to tech regulation in the United States can be narrowed to two key elements. First, a lack of political cohesion within the legislative body can produce significant stalemate and paralyze the regulatory process. Second, the consolidated power held by tech companies gives them considerable leverage over the regulatory process itself. Kathleen Thelen calls this leverage a form of “platform power.” Governments thus face a unique regulatory challenge: to rein in the monopolistic and privacy-undermining practices of tech companies while also appeasing their electorates and preserving the consumer experience. It is against this context that AI regulation must be evaluated.

The US mostly took a ‘hands-off’ approach to the development of the data economy, and consequently, corporations were able to amass significant data about users and exploit this data with minimal regulation. Shoshana Zuboff, in her book "The Age of Surveillance Capitalism," explores how the commodification of data has become a central tenet of the digital economy. A derivative effect of this data economy is the novel mechanisms by which end-user consumers can be subjected to surveillance through their use of technology. Zuboff describes surveillance capitalism as a “direct intervention into free will” and “an assault on human autonomy.” Global economic and political structures failed to effectively address the privacy and antitrust concerns that arose during the first wave of the digital economy. As a result, some of the largest internet players, such as Google, Amazon, and Meta, have only recently faced pressure from Congress concerning consumer privacy and antitrust issues. Zuboff’s insights should serve as a crucial warning about the regulation of AI.

The dissemination of AI into every corner of our economy and our lives raises severe privacy concerns. If we accept Zuboff’s argument that the world’s digital economies facilitate a modern form of surveillance—that of surveillance capitalism—then it is time to consider whether AI has the power to demolish even our willing suspension of disbelief vis-à-vis our “privacy rights.” A post-AI economy could leave us with neither the privacy rights we seek, nor even the perception that we possess such privacy in the first place.

The mechanisms by which end-user data is collected, controlled, stored, and even monetized are dominated by a small group of disproportionately large corporations. Recent antitrust developments, ranging from claims brought against these companies to congressional policymaking, highlight the current system’s inadequacy in dealing with these matters. The machine learning that underpins AI requires immense data sets, including data containing personally identifiable information, from which AI tools identify trends and generate analyses. If such capabilities are concentrated in the hands of a few corporations, and the algorithmic knowledge is not made public, are governments set to repeat yesterday’s mistakes?

One explanation for why legislatures have been incapable of tackling these issues is that you cannot “regulate what you don’t understand.” Professor O’Reilly argues that regulators simply lack adequate knowledge about how corporations utilize data as a means to generate profit. O’Reilly suggests that “effective regulation depends on enhanced disclosures.” This concern is even more relevant given the lack of technical and scientific understanding of how AI, machine learning, and large language models even work. If regulators do not possess adequate knowledge, how can consumers trust Congress not to repeat the same mistakes in the even more technically complex AI economy?

Arguably, ‘AI and privacy’ is an oxymoron. AI allows for the processing and the analysis of data at an unprecedented scale. Access to such tools would prove incredibly lucrative for companies relying on, for example, targeted advertisements or personalized content curation. The use of generative AI has allowed tech corporations to “shift from being mere curators of content to becoming creators of entirely new content tailored to each individual user,” enabling a transition from “a surveillance capitalism which curates” to a “surveillance capitalism which creates.”

To what extent does Biden’s Executive Order tackle these privacy concerns? At first glance, the Order explicitly emphasizes—as is common within American privacy jurisprudence—the security of “personally identifiable information,” directing agencies to evaluate “the collection, processing, storage, and dissemination of commercially available information that contains personally identifiable information.” The Order further emphasizes the role of agencies, in particular the Federal Trade Commission, in implementing measures regarding privacy-related technologies, by scrutinizing not only companies developing AI technologies but also those companies’ relationships with AI technology vendors and licensors.

While the Executive Order establishes some preliminary measures to counter some of the threats posed by AI, it nevertheless falls short of comprehensive regulation, which must be achieved through a concerted legislative effort. As it currently stands, Big Tech has developed methodologies to extract and then monetize key information from end-users. This new form of surveillance capitalism warrants effective and proactive legislation to ensure the protection of consumer privacy rights.