Facial recognition technology is already ubiquitous in modern life. Anyone with an iPhone likely encounters it every time they unlock their phone. These iPhone users might feel comfortable with this because the software is confined to the intimacy of their own phone. But now, the TSA is modernizing its driver’s license checks with its own version of facial recognition.

In many ways, the TSA’s second-generation Credential Authentication Technology (CAT-2) units are only slightly different: users scan their driver’s license (or other photo identification) and the machine uses a camera to confirm that the face on the ID matches the face of the person standing in front of it. This is known as 1:1 (one-to-one) facial matching.

In contrast, 1:n (one-to-many) facial identification systems compare a passenger’s live image to a database of pre-selected reference photos. Some airlines, such as Delta, have taken this next step and removed the presentation of the ID altogether. Delta’s Digital ID system only requires users to get to the airport and look into a camera. No longer is the task simply matching the face on an ID to the person holding it: this system looks at the person in front of it and recognizes them as someone the airline expects to be on a flight that day.
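The difference between the two modes can be sketched in a few lines of Python. The embeddings, similarity threshold, and gallery below are hypothetical stand-ins for whatever a real system computes from face images; this is an illustration of the 1:1 vs. 1:n distinction, not any vendor’s actual pipeline.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_1_to_1(live, id_photo, threshold=0.8):
    """1:1 matching (CAT-2 style): does the live face match this one ID?"""
    return cosine_similarity(live, id_photo) >= threshold

def identify_1_to_n(live, gallery, threshold=0.8):
    """1:n identification (Digital ID style): search a gallery of
    expected travelers for the best match above the threshold."""
    best_name, best_score = None, threshold
    for name, reference in gallery.items():
        score = cosine_similarity(live, reference)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name  # None means no one was recognized
```

Note that 1:1 matching answers a yes/no question about a single claimed identity, while 1:n must rank every enrolled traveler, which is why a weak threshold is riskier in the 1:n case.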

Delta’s system works by taking a photo of the traveler, encrypting it, stripping it of biographic information, and sending the image to U.S. Customs and Border Protection’s (CBP) facial biometric matching service. Ultimately, it is the CBP that verifies the image against some government issued photo ID.
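The described flow can be sketched as a small pipeline. Everything here is hypothetical: the function names are made up, the XOR "cipher" is only a placeholder marking where real (undisclosed) encryption would happen, and the exact ordering of the encrypt and strip steps in Delta’s actual system is not public.

```python
def strip_biographic(record):
    """Drop name, date of birth, and any other biographic fields,
    keeping only the image bytes."""
    return {"image": record["image"]}

def xor_encrypt(payload, key):
    """Stand-in XOR stream cipher. The real encryption used by
    Delta/CBP is not public; this only marks where that step sits."""
    stream = (key * (len(payload) // len(key) + 1))[:len(payload)]
    return bytes(a ^ b for a, b in zip(payload, stream))

def prepare_for_cbp(record, key):
    """Sketch of the described flow: strip biographic information,
    encrypt the image, and return the payload sent to CBP's matcher."""
    anonymized = strip_biographic(record)
    return xor_encrypt(anonymized["image"], key)
```

The point of the sketch is the separation of duties: the airline’s side never forwards biographic fields, and only CBP’s matching service ever compares the image against government-issued ID photos.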

For now, neither TSA’s CAT-2 scanners nor Delta’s Digital ID system is mandatory. The CAT-2 screens show “clear language that notifies travelers they may decline having their photo taken” and “[t]ravelers under 18 are not photographed.” Additionally, TSA stresses that any personally identifiable information is handled consistently with the Department of Homeland Security’s Fair Information Practice Principles (FIPPs), meaning that TSA deletes the information after confirming a traveler’s identity. Delta’s implementation is entirely opt-in.

Even assuming the government acts in good faith, cause for concern remains. In 2019, the CBP suffered a “major cybersecurity incident” during a biometric pilot program. A report on the incident found that a subcontractor had transferred data, including traveler images, to its own company network. The subcontractor was then the victim of a cyberattack that compromised all of the data on its network, including the biometric data.

It is no surprise then that many are concerned about the proliferation of facial recognition and the potential for misuse or mishandling of such technology. A few months ago, several U.S. senators introduced a bill titled the “Traveler Privacy Protection Act of 2023” which, if passed, would severely limit facial recognition in airports. Specifically, it would forbid the TSA from using facial recognition technology for any purpose unless expressly authorized by Congress at some future time. Furthermore, it would require existing facial biometric data to be disposed of.

One of the bill’s sponsors, Senator Jeff Merkley (D-OR), has a history of skepticism concerning the TSA’s facial recognition technology. In February of last year, Merkley and a handful of other senators issued a letter to the TSA Administrator expressing concerns about travelers’ ability to opt out, racial discrimination, and data privacy (citing the 2019 CBP incident).

Among the chief concerns is the racial impact such systems can have, even without intentional discrimination. One study from the National Institute of Standards and Technology (NIST) examined nearly 200 facial recognition algorithms and found concerning results. For 1:1 matching (like the TSA’s CAT-2 scanner), the NIST team found “higher rates of false positives for Asian and African American faces” relative to images of Caucasians, with differentials ranging from 10 to 100 times depending on the algorithm. NIST also found similar discrepancies for Native American groups. In 1:n systems, the NIST team “saw higher rates of false positives for African American females.”
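The scale of a 10-to-100-times differential is easier to grasp with concrete numbers. The counts below are purely illustrative, not data from the NIST study:

```python
def false_positive_rate(false_accepts, impostor_attempts):
    """Share of impostor comparisons the algorithm wrongly accepts."""
    return false_accepts / impostor_attempts

# Illustrative counts only -- not NIST's data.
rates = {
    "group_a": false_positive_rate(1, 100_000),    # 1 bad match per 100k
    "group_b": false_positive_rate(100, 100_000),  # 100 bad matches per 100k
}
# Ratio of roughly 100, the top of NIST's reported range.
differential = rates["group_b"] / rates["group_a"]
```

At airport scale, even small absolute rates compound: a group facing a 100x higher false-positive rate sees correspondingly more wrongful matches, secondary screenings, and delays.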

Crucially, the NIST team noted that these discrepancies were not found in all the algorithms they tested, suggesting that some algorithms are better than others. Regardless, these results underscore how important it is to ensure that the government and private organizations are using the least discriminatory algorithms and not merely ones that are passable under most circumstances.

New technology makes tasks easier, and using facial recognition software to check passenger identity is no different. Unfortunately, while such technology may make things easier for people on average, other individuals may face greater difficulties than before, with the distinction falling along racial lines. Additionally, facial recognition technology in airports could make hackers’ jobs easier if data is not securely protected. All these considerations fuel the debate over whether and how this technology should be deployed. However, as one retired TSA official stated regarding facial recognition’s ever-expanding use in our private lives, its use in airports “is here to stay.”