I. Terms of Service
“It’s free and always will be.” It’s hard to fathom that I once felt relief at the sight of this reassuring tagline when logging in to Facebook. Along with other early users, I signed up during Facebook’s relative infancy, at a time when the website was restricted to college students with a valid “.edu” email. As a freshman undergrad attending university out of state, I had a particularly active engagement with the social network. I missed my friends, most of whom remained in-state for school, and Facebook gave me a fun, free, quasi-teleportation device for staying up to date and in touch with them. Indeed, I felt social fulfilment from the engagement Facebook provided, but also a sense of dread. I could hardly afford my books for class—what if the platform I’d come to rely on for that fulfilment decided to charge for the service? At the time, the tagline, “It’s free and always will be,” helped alleviate that fear. Ironically, however, fifteen years and an eruption of increasingly dangerous privacy concerns later, the “free” use of services such as Facebook is now itself the source of dread.
The common business model employed by today’s technology platforms has been exposed by works like Shoshana Zuboff’s “The Age of Surveillance Capitalism” and the docudrama “The Social Dilemma,” among others. Surveillance capitalism, in short, is the broad surveillance, collection, and monetization of users’ intimate behavioral data—what you like, what you don’t, and, with shocking precision, exactly how much you like or dislike it. As a brief illustration, it’s not difficult to imagine that platforms like Facebook and Instagram know which posts you look at, but did you realize they also monitor and store exactly how long you looked at each one? Or how many times? Or whether you zoomed in? This is a small sample of the intimate behavioral data referenced above and, if you’re like me, it tends to cause a feeling of unease, even for those of us with nothing inherently bad or embarrassing to hide from Big Brother.
Zuboff and other modern muckrakers have enabled a more candid public discourse regarding the contract that exists between user and platform—unmasking how these services were never “free” to begin with and how they pose an existential threat to personal autonomy and freedom of thought by gradually changing users’ behavior and perception of the world around them. Such changes can affect what you do, how you think, and who you are. The purpose of this essay is to discuss the viability of a legal choice to pay for services like Facebook and how such a right could help curb the inherent threats of a business model that monetizes the intimate behavioral data of its users. This right would be rooted in federal regulation and applied to contracts where the primary business model is collecting, analyzing, and selling user data. Two larger questions, which, due to constraints of space, will not be explored here, are (i) how or where to draw the line in determining what constitutes a company’s “primary business model,” and ultimately (ii) whether such a right would actually work to reduce the existential threats of surveillance capitalism. Due to foreseeable collective-action problems, like financial inequality among users and the indifference some may feel about their privacy, the choice of a subscription model alone could prove fruitless against such threats. But while this right may be a small knife in a battle against technology platforms that roll in tanks, when the stakes are this high, resistance is never futile.
II. Money or Data & The Right to Choose
A. A Sound Business Decision
From 2011 to 2019, Facebook’s annual ARPU (Average Revenue Per User) grew from $5 to about $29 (USD). This metric is calculated by dividing total revenue by number of users and “shows how effectively companies monetize their users.” Here, the relevance of understanding the ARPU of “free” technology platforms rests on the proposition that money generated by a subscription model could replace the revenue lost when users (i.e., “subscribers”) opt to pay the platform directly instead of allowing their behavioral data to be monetized. With this configuration, users and platforms could realistically transition from “free” to paid because (i) the service would not be prohibitively expensive and (ii) it would fairly compensate companies for their lost advertising revenue. Granted, when you consider the feasibility of a subscription to a service that has seen nearly sixfold financial growth in just eight years, the future affordability of such a service is in question. But those concerns may hold little weight; technology platforms, and Facebook in particular, are in danger of “becoming victims of their own success.” As more users sign up, the “pool of potential new users” shrinks; what’s more, specifically with Facebook, the number of active users has declined recently and is predicted to “remain flat or decline” in the future. These challenges suggest that ARPU has plateaued and is unlikely to continue the exponential growth of the last eight years. Therefore, assuming a stabilized ARPU, users who choose a subscription contract can have reasonable assurance that their costs will not balloon.
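For readers who want the arithmetic spelled out, the ARPU formula and the growth it implies can be sketched in a few lines. The dollar figures below are the approximate values cited above, not official Facebook financials, and the break-even subscription price is a back-of-the-envelope assumption rather than anything a platform has actually proposed:

```python
# Illustrative sketch of the ARPU arithmetic discussed above. Dollar
# figures are the approximate values cited in this section.

def arpu(total_revenue: float, users: float) -> float:
    """Average Revenue Per User: total revenue divided by number of users."""
    return total_revenue / users

arpu_2011 = 5.0   # approximate annual ARPU in 2011, USD
arpu_2019 = 29.0  # approximate annual ARPU in 2019, USD

# Growth over the eight-year span: the 2019 figure is 5.8x the 2011 figure.
growth_multiple = arpu_2019 / arpu_2011

# A subscription priced at the 2019 ARPU would fully replace the per-user
# advertising revenue: about $29 per year, or roughly $2.42 per month.
monthly_price = arpu_2019 / 12

print(f"Growth: {growth_multiple:.1f}x over eight years")
print(f"Break-even subscription: ${arpu_2019:.0f}/year (${monthly_price:.2f}/month)")
```

The point of the sketch is modest: even priced to replace per-user advertising revenue entirely, a hypothetical subscription would cost less than a cup of coffee each month, which supports the claim that such a service need not be prohibitively expensive.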
B. Data Privacy as a (Fundamental) Right
The discussion above helped to illuminate the financial plausibility of a subscription model. Yet very few actually exist on the market—why? Given the lack of paid options, one can infer that technology platforms see value in user data beyond what is reflected in the ARPU. Indeed, the data and behavioral patterns collected by these platforms have already demonstrated the ability to influence and modify human behavior. This capacity, viewed from slightly varying perspectives, can be understood as the intrinsic value of privacy and of behavioral patterns themselves. A tool such as this is incredibly powerful, and inherently dangerous even in the hands of those with the best of intentions; yet, as private companies offering completely optional services, technology platforms do not face the privacy obligations imposed on entities such as state actors or common carriers when handling user data. This should be reviewed and reformed through federal regulation. Behavioral user data is so fundamentally sensitive and powerful that people should, at the very least, have the legal choice of whether to pay to keep it from being collected and monetized against them.
This right to choose, when applied to the types of contracts that presently exist between technology platforms and their users, could function as another useful tool against surveillance capitalism and help reduce the existential threats it poses to personal autonomy and freedom of thought. Some users may, of course, not have the means to pay for such a choice, while others simply might not care (today) about the consequences of their behavior being collected and sold. These concerns are valid, but they should not preclude the adoption of such a right just because it may not be equally applicable to all users or produce systemic change on its own. Death by a thousand cuts might be slow, but it can ultimately prove effective.