Law enforcement officers in the United States are increasingly turning to a new technology to make their jobs easier: facial recognition software. Although proponents of the technology tout the potential for speedier criminal investigations, this promise of swift justice likely comes at the price of individual privacy.

Facial recognition technology works by comparing the unique features of two faces to determine whether they match. To use the technology, law enforcement officers need a large database of photographs of known individuals against which they can compare a photo of an unidentified suspect. For example, Clearview AI, a facial recognition platform used by thousands of law enforcement agencies, boasts a database of more than 50 billion images.
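To make that one-to-many comparison concrete, here is a minimal Python sketch of the matching step. Everything in it is an illustrative assumption rather than a description of any vendor's actual system: real platforms derive each numeric "embedding" from a face image with a neural network, and the database, labels, and similarity threshold below are stand-ins.

```python
import numpy as np

# Minimal sketch of one-to-many face matching. The embeddings are random
# stand-ins; a real system would compute each 128-number vector from a
# face image with a neural network. Names and values are illustrative.
rng = np.random.default_rng(0)
database = rng.normal(size=(1000, 128))        # enrolled "known" faces
labels = [f"person_{i}" for i in range(1000)]
# A noisy copy of one enrolled face stands in for the suspect's photo.
probe = database[42] + rng.normal(scale=0.05, size=128)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Score two embeddings; values near 1.0 suggest the same face."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Compare the probe against every enrolled face and keep the best score.
scores = [cosine_similarity(probe, row) for row in database]
best = int(np.argmax(scores))

THRESHOLD = 0.9  # illustrative cutoff; lowering it returns more
                 # candidate matches, including false ones
if scores[best] >= THRESHOLD:
    print(f"Candidate match: {labels[best]} (score {scores[best]:.3f})")
else:
    print("No candidate above threshold")
```

The threshold is the crux of this sketch: set it lower and the system surfaces more candidate matches, including wrong ones, which is the failure mode behind the misidentifications discussed later in this piece.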

The creation and maintenance of such massive repositories of individuals’ biometric data inherently threatens citizens’ privacy rights. For example, a Dutch watchdog recently fined Clearview more than $30 million for scraping and stockpiling billions of photos from the internet without the knowledge or consent of the individuals in the photos. Nor was this the first time Clearview had faced legal scrutiny: in 2022, the company settled a lawsuit brought by the ACLU alleging similar privacy violations.

As with many rapidly developing technologies, advances in facial recognition have outpaced federal regulation. Although a significant number of federal agencies use facial recognition software, no federal laws limit how law enforcement uses the technology. The U.S. Commission on Civil Rights noted that, as of September 2024, federal regulations contained “no provisions requiring regular oversight of the government use” of facial recognition technologies. Furthermore, according to a report by the U.S. Government Accountability Office, several agencies deployed the software before requiring even basic training in its use.

Despite this lack of federal controls, local governments have begun to push back against law enforcement’s use of facial recognition technology. For example, Maryland recently passed a law restraining the role facial recognition software can play in criminal investigations, and cities such as Austin, Texas, and San Francisco, California, have banned police use of the technology outright. Enforcing such localized restrictions, however, has proven difficult. Some police officers in cities with bans in place have simply outsourced their facial recognition queries to neighboring police departments not subject to the same constraints. Moreover, police departments rarely divulge when they have relied on facial recognition during an investigation, which further complicates efforts to increase accountability.

Even with proper training and legal oversight of law enforcement agencies, flaws inherent in the software itself can still lead to devastating results for individuals. Although facial recognition technology has a high accuracy rate in general, it is significantly more likely to misidentify women and people of color. These misidentifications have led to at least seven wrongful arrests in the past few years. For example, police in Detroit, Michigan, wrongfully arrested Robert Williams “in front of his two young daughters and wife” based on a faulty facial recognition match. Cases like Mr. Williams’s prove that concerns over police use of facial recognition software are not merely speculative.

Ultimately, facial recognition technology can likely increase the efficiency of law enforcement efforts. However, such gains in efficiency are not worth the erosion of individual privacy rights that will occur if law enforcement continues to use facial recognition software unchecked.