Surveillance Under the Mask of Identification

We may imagine any number of circumstances in which the government might need to know whether someone is who she says she is. For example, the government might reasonably need to verify the identities of visa applicants. Similarly, local and federal law enforcement agencies are tasked with finding and identifying suspects efficiently and effectively. At first, it may seem simple to support these agencies’ use of new technology that promises to help them in their identification projects. However, there has been recent pushback, both legal and political, against some of the most popular methods of technological identification, and for good reason. In essence, these identification projects usher in wide-ranging surveillance tactics under the guise of a narrower, more modest project.

Citizens and the government will need to balance the necessity of these processes against their privacy tradeoffs. Ultimately, what we decide is worth the privacy risks should depend significantly on the efficacy of the process and the larger surveillance regime into which these identification tools fit. Below, I consider two examples of governmental identification processes that rely on technology in different ways, the types of challenges they face, and the resulting choices we face as citizens.

Social Media Screening of Visa Applicants

The Immigration and Nationality Act (INA) codifies specific provisions requiring visa applicants to disclose identifying information about themselves. Last year, the State Department decided that collecting social media identifiers was a “necessary” addition to this process and updated immigrant and nonimmigrant visa applications to include a section where applicants must list all social media handles that they have used in the last five years on twenty different platforms.

In early December, the Knight First Amendment Institute at Columbia University [1] filed a lawsuit challenging this change under the Administrative Procedure Act and the First Amendment, alleging that the social media registration requirement chills speech and deprives American citizens of their constitutional rights as listeners. The complaint further argues that social media handles are a particularly unreliable means of identification. Of course, many social media platforms allow users to register under whatever name they wish, making the proposition that these identifiers are useful, much less necessary, dubious. The rule additionally provides that the social media handles be retained for a century. The Knight Institute notes that this information is then “shared within the U.S. government, and also disseminated, in some circumstances, to other governments.”

This lawsuit will wind its way through the courts, but its very filing is evidence that this sort of technological identification process implicates concrete and controversial tradeoffs.

Facial Recognition Technology

In 2013, San Diego County rolled out a pilot program of the Tactical Identification System (TACIDS). TACIDS authorized officers to take pictures of people in the field, which were aggregated in a database that was shared with other government agencies. Over the last three years, officers registered over 65,000 scans. Agencies like ICE have reportedly used the database to identify people who made agents’ “spidy [sic] senses” tingle.

Recently, however, California Governor Gavin Newsom signed A.B. 1215, a three-year moratorium on biometric surveillance that highlights the immutability of biometric identifiers. Further, there is good evidence that the current technology is biased and unreliable. After a request from the Electronic Frontier Foundation (EFF), San Diego County decided to suspend its facial recognition program consistent with the bill.

At least ten cities across the country have also now adopted measures that allow citizens to control the surveillance technology available to local law enforcement. The model has worked so well that EFF recently launched a task force called About Face with the goal of ending government use of facial recognition technology for identification purposes. About Face encourages local communities to pass legislation that defines and prohibits government uses of facial recognition technology.

Privacy-Privacy Tradeoffs in Identification

I highlight social media and facial recognition surveillance because they have enormous technological and practical differences when used as identification tools. Where social media surveillance might be uniquely ill-equipped for identification projects because of the ease of pseudonymity, facial recognition surveillance is dangerous because of the certainty law enforcement attaches to affirmative matches. Our social media handles are endlessly fungible; our faces, less so. Importantly, though, these surveillance methods are not contextually limited. The government already uses biometric surveillance at borders, and local police often surveil social media accounts. The distinction in kind, however, is relevant to the ways we grapple with the privacy tradeoffs concomitant with these forms of identification.

These processes are surveillance techniques, as applied. Proponents may argue that the tradeoffs, in the contexts of both border entry and local policing, are privacy-safety balances. The problem with this framing, however, is that there is but a tenuous link between these sorts of identification interests and safety. ICE’s decision to scan someone’s face on the street because of a hunch cannot reasonably be explained in terms of a safety interest. Nor can the deportation of a Harvard freshman whose friends criticized the U.S. government on social media. In both these cases, and in the vast majority of cases where the government uses these technologies, the privacy tradeoffs are best understood when balanced against other privacy interests, not safety considerations.

So when evaluating these tools, we should consider the potential privacy-privacy tradeoffs and respond accordingly. Returning to our examples, the social media registration requirement establishes a system in which the government restricts expressive and associational privacy to the benefit of other types of searches during the visa process. The privacy implications for individual Americans are inevitable, as their social media activity will be captured if they interact with anyone applying for a visa. In this sort of case, the tradeoff is difficult to define because there is no guarantee that the government will forgo any of its other established identification processes. Were identification a binary process, one would be right to wonder what benefit the registration requirement offers. Because identification is instead an evidentiary amalgam, there is good reason to think that the marginal certainty of a positive identification is not worth the privacy harms.

Facial recognition technology requires that we confront a different, essentialized identification system. Rather than relying on over-collection in an attempt to cure the perception of insufficient information, the privacy concerns with facial recognition technology center on the assumed certainty of the result, as Congress has highlighted in hearings and the Government Accountability Office noted in a report on the FBI’s use of facial recognition technology. Here, the tradeoffs are easier to spot. The technology’s biases mean that improper searches, and even wrongful convictions, are likely, and the only way to decrease the occurrence of these false positives is to scan more faces.

Thus, identification attempts should be recognized and evaluated under the framework of government surveillance and privacy tradeoffs. Both social media screening and facial recognition, though they sit on opposite ends of the technological spectrum, implicate classic surveillance strategies. The government must figure out ways to identify individuals, but mass surveillance need not be—and should not be—one of them.

Footnotes

[1] The author served as an extern at the Knight Institute. He is no longer affiliated with the Institute. This post does not reflect any of the opinions, understandings, or beliefs of the Institute or any of the individual lawyers involved in the lawsuit.