
Criminal investigations increasingly rely on digital tools, often in ways that remain largely invisible to the public, and sometimes even to defense counsel. One of the most significant developments is the growing use of facial recognition technology, which compares images captured from surveillance footage against large databases of mugshots, driver's license photographs, and other records to generate possible matches for investigators.
Law enforcement frequently characterizes these systems as mere investigative aids rather than evidence. But that description has come under increasing scrutiny. In many cases, police departments fail to disclose that facial recognition technology played any role in identifying a suspect in the first place, making it difficult for defense attorneys to challenge the reliability of the identification or the technology that produced it.
The investigative process typically begins with an image. Investigators pull a still from a security camera, upload it into a facial recognition system, and the software searches a database for faces that resemble the one in the image. It then returns a ranked list of candidate matches, which detectives review to decide who merits further investigation.
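The core of that ranking step can be sketched in a few lines. The sketch below is purely illustrative: the function names, the toy "embedding" vectors, and the choice of cosine similarity are assumptions for demonstration, not details of any deployed law enforcement system. Real systems convert each face image into a high-dimensional numeric embedding with a trained model and then score database entries by similarity; the key point the article makes, that the output is a ranked list of candidates rather than a definitive match, is visible in the code's return value.

```python
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def rank_candidates(probe, gallery, top_k=3):
    """Return the top_k gallery records most similar to the probe image's embedding."""
    scored = [(name, cosine_similarity(probe, emb)) for name, emb in gallery.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Hand-made toy vectors standing in for embeddings a real face model would produce.
probe = [0.9, 0.1, 0.3]  # embedding of the surveillance still
gallery = {
    "record_A": [0.88, 0.12, 0.31],
    "record_B": [0.10, 0.95, 0.20],
    "record_C": [0.70, 0.30, 0.40],
}

for name, score in rank_candidates(probe, gallery):
    print(name, round(score, 3))
```

Even in this toy version, every record receives a score and several plausible candidates come back ranked, which is why a "match" from such a system is a lead to be corroborated, not an identification.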
Many police departments, including the New York Police Department, maintain policies acknowledging that a facial recognition match alone cannot justify an arrest. Officers are generally required to corroborate any match through additional investigative steps, such as reviewing other photos, speaking with witnesses, or gathering independent evidence. In theory, the technology is meant to generate investigative leads rather than definitive identifications.
In practice, however, the technology often disappears from the official account of the investigation. Because facial recognition is treated as a lead rather than evidence, its use frequently goes undocumented in police reports. Officers may instead attribute the identification to more traditional investigative methods, such as witness statements or general "investigative means." This characterization becomes particularly difficult to sustain when a facial recognition search directly triggers the witness identification procedure that follows: in such cases, the witness identification cannot easily be described as independent corroboration, because it traces back to the algorithmic search that preceded it.
Defense attorneys and legal scholars argue that this creates a serious problem: courts cannot evaluate evidence they do not know exists, and defense counsel cannot challenge the reliability of an identification process that was never disclosed.
That concern intersects with the constitutional disclosure rule established in Brady v. Maryland. The Supreme Court held in Brady that suppressing evidence favorable to the accused violates due process, reasoning that the government's interest in a criminal prosecution is not to win, but to see that justice is done. Under Brady, prosecutors must disclose evidence that is material and favorable to the defense, meaning evidence that creates a reasonable probability of a different outcome at trial, including information that could undermine the reliability of the prosecution's case. The obligation extends to evidence known by investigators, not just prosecutors themselves.
Defense attorneys increasingly argue that undisclosed facial recognition searches fall within that duty. If an algorithm helped identify a suspect, the defense has a clear interest in examining how the system performed, including error rates, the quality of images used, and whether multiple matches were returned. Without disclosure, defendants lose the ability to challenge the technology or the investigative decisions that followed from it.
Proponents argue that existing safeguards already adequately protect defendants. Because facial recognition results are classified as investigative leads rather than evidence, they are never introduced at trial, and the corroborating evidence that actually builds the case is already subject to full discovery. Prosecutors have advanced precisely this position in litigation, arguing that disclosure is unnecessary because the technology "was merely a tool, among many other investigative tools that law enforcement use daily to identify potential suspects." Some officials have further contended that mandating disclosure risks exposing proprietary technology and investigative capabilities in ways that could undermine future investigations.
However, real-world cases illustrate the risks of unchecked reliance on facial recognition. In 2019, New Jersey resident Nijeer Parks was arrested after facial recognition software identified him as a suspect in a shoplifting case. He insisted he was nowhere near the scene of the crime, and records later showed he had been about thirty miles away at the time. Parks spent days in jail before the charges were dropped. His case became an example of a wrongful arrest linked to facial recognition technology. Investigations suggest that hundreds of arrests may involve facial recognition leads, although many defendants never learn the technology was used because police rarely disclose it.
Despite these stakes, facial recognition remains largely unregulated. As a result, practices vary widely between jurisdictions, and many agencies lack clear rules about documentation or disclosure.
A small number of states, including Washington, Montana, and Maryland, have enacted laws requiring law enforcement agencies to disclose their use of facial recognition technology to criminal defendants prior to trial.
Civil rights organizations, scholars, and policymakers have increasingly called for reforms. Many proposals are straightforward: require officers to document when facial recognition is used, preserve the results generated by the system, and make those records available in criminal discovery. Organizations such as NYU's Policing Project and the ACLU have argued that transparency is essential for courts and defense attorneys to evaluate the reliability of these technologies.
Facial recognition is only one example of a broader challenge confronting the criminal justice system. As policing incorporates increasingly complex technologies, the line between a simple investigative lead and the evidence that ultimately shapes a case becomes harder to draw. When an algorithm points investigators toward a suspect, that decision can influence everything that follows, even if the technology itself never appears in court. Whether courts and legislatures will eventually require full transparency remains an open question, but it is one that is increasingly difficult to ignore.
