Facial recognition is unreliable — but police use it anyway

Can police use facial recognition as probable cause? Probably not, but some are doing it anyway

Facial recognition became a tool for police investigations as early as 2001, and by 2016 the images of one in every two American adults were in a law enforcement facial recognition database.

Use of facial recognition has soared despite studies showing that it is more likely to misidentify Black and brown people than white people, and despite incidents in which it has led to false arrests.

A new report from Georgetown Law's Center on Privacy & Technology argues that facial recognition has serious flaws that make it unreliable for use in investigations, in sharp contrast to proponents' claims that facial recognition removes human bias from the process of identifying suspects.

Grid spoke with the report’s author, Clare Garvie, the training and resource counsel at the National Association of Criminal Defense Lawyers, about her findings. This conversation has been edited for length and clarity.

Grid: What was the most surprising thing that you found while working on this report?

Clare Garvie: The most surprising thing to me in researching this paper was that face recognition has been used by police as probable cause to make arrests. Most if not all of the policies I've read state very clearly that face recognition is an investigative lead only, and that additional investigative steps must be taken before an arrest is made. And yet we found half a dozen cases we're aware of where that's not what happened: Someone was arrested, and the only thing tying them to the crime was a face recognition search that turned up their face.

G: What are the human implications of that?

CG: At worst, this could mean that someone was misidentified and arrested for a crime that they didn't commit. We know this has happened in three cases: Robert Williams and Michael Oliver in Michigan and Nijeer Parks in New Jersey. What we don't know is how many other cases this has happened in, and that's because face recognition and the searches that police run are typically not disclosed to the defense [counsel] as part of regular disclosure, as Brady evidence [of material favorable to the accused], or as evidence that's material to the defense making a case such that they can have a fair trial.

G: Is there a reason for that?

CG: The reason is that, on paper, face recognition is considered an investigative lead only, and it's considered to be reliable, so it's treated as unimportant for law enforcement to disclose it. What this paper argues, and what I would argue, is that that's not true. There are so many opportunities for mistakes, for cognitive bias, for something to go wrong in the search process, that disclosure is necessary in order to protect the defendant's right to a fair trial.

G: What has the review process been like for this technology when it comes to determining its accuracy?

CG: Facial recognition is often talked about as a technology, an algorithm. And there has been a lot of research done on how accurate or inaccurate these algorithms are. That's where the National Institute of Standards and Technology has been very instrumental. With its ongoing Face Recognition Vendor Tests, it has given researchers, police departments and the public a very valuable tool for understanding the strengths and limitations of facial recognition algorithms.

However, the algorithm is one piece of a multistep search process that includes significant degrees of human judgment, psychology and forensic science. And the literature in these fields tells us that humans are not innately good at identifying people by their faces, particularly unfamiliar faces. The degree of cognitive bias present in a given face recognition search raises a serious question about the risk of misidentification and how often these searches, which involve both human and machine, can get it wrong. What this report does is frame face recognition not just as a technology but as a forensic investigative method, and analyze it from that perspective, to argue that as a forensic investigative tool it has not been established as a forensically sound, empirically valid method. As a consequence, it has been relied on as if it has a reliability that it quite simply does not.

G: Proponents of the technology would argue, though, that this tool removes human bias from the decision-making process. What are the ways that human psychology interacts with this technology?

CG: Historically, the "human in the loop," if you will, has been considered a valuable check against any error that a facial recognition algorithm may make. But that ignores the fact that humans also make errors, and those errors may compound each other. It also ignores the fact that in any given face recognition search, there will be no protection against cognitive bias, such as the motivation to find a match, or against external information like an individual's prior arrest history. That information may taint the biometric review, if you will, such that a determination of identity is based not merely on the similarities between two faces, but on whether someone has previously been arrested for a similar crime, whether an analyst thinks the person looks guilty, or whether the analyst is motivated to find a match.

One of the really interesting pieces of research being explored right now is the relationship between the types of errors that algorithms make and the types of errors that humans make: They're very much the same. Algorithms are more likely to misidentify somebody within the same demographic cohort as the person they're looking for. So are humans; we are far more likely to mistake somebody for someone else of the same race, sex and age as the person we're seeking to identify. Since the human in the loop is going to make the same types of errors as the algorithm, that review is not a valuable step for correcting the errors the algorithm may make.

G: What was the reasoning behind producing this report now?

CG: In response to the reports that the Center on Privacy & Technology was putting out about facial recognition, many defense attorneys were reaching out to us saying, "Hey, I think I have face recognition in my case, but I don't know how to argue against it," or, "How do I know whether face recognition was used in my case?" This was because, one, it was being used in a lot of cases, and still is, and two, it has not been disclosed to the defense as a matter of course. Yet defense attorneys were rightly reading these reports and surmising that they could challenge the identification of their client as the main suspect made using this technology.

The defense attorneys started flagging this as a potential issue in their cases and asking what they could do about it, and what the relationship was between face recognition and the right to due process.


It became clear to me that there was a lot of literature about answering this question, either with other forensic sciences like latent fingerprints, or probabilistic genotyping, or hair microscopy. It was looking at the limitations of these forensic sciences, computer science, looking at facial recognition algorithms, and cognitive psychology, looking at the limitations of human identification. But no one had put all the pieces together and looked at how face recognition searches are run in operational conditions and the degree to which cognitive psychologists, computer scientists, and forensic scientists were finding out about rates of accuracy and rates of misidentification, to what extent we would find those issues in how face recognition searches were run by law enforcement.

G: What do you think is the scale of this problem, particularly in relation to due process?

CG: I think it's a huge problem. Facial recognition has been used in police investigations going back to 2001. That's 100,000 cases, maybe more, but we quite simply don't know, because this information is not disclosed to the defense. I would argue that this means there are 21 years and counting of due process violations on the books in many jurisdictions across the United States. That is a constitutional crisis. It is first and foremost a crisis for the people who are identified or misidentified using the technology, but it's also a crisis for courts, which certainly don't want to perpetuate, or fail to catch, Brady violations of the right to due process. What happens in a jurisdiction when a court says yes, facial recognition must be disclosed to the defense? What does that mean for the thousands of other cases that have been prosecuted in that jurisdiction where the defense never had the chance to challenge facial recognition? So I view this as a very important due process and constitutional question for the country.

G: What are a couple of the different tools that lawyers have at their disposal to address this?

CG: The main audience for this paper is defense attorneys, but more broadly judges and prosecutors as well, to frame facial recognition not just as a technology issue or a policing issue but as a forensic science and evidence issue. First and foremost, defense attorneys should feel empowered to request discovery on facial recognition. They should be entitled to information about how their client came to be identified as the main suspect in a case. Following this, it may be appropriate to move for suppression, arguing that the search was unreliable or unduly suggestive, or that other deficiencies in the search mean the identification of their client as the main suspect should be suppressed.

Another audience for this paper is the research community. The paper doesn't definitively answer the fundamental question it asks, which is: How reliable is a lead generated by a facial recognition search? What it does, hopefully, is provide a road map for researchers and users of facial recognition to take a close look at the avenues through which mistakes can creep into a face recognition search and to start thinking about what a study that would generate a reliability metric for these searches would look like. Is it even possible to create such a study? Or do we need to put parameters in place, like eliminating the use of Photoshop in facial recognition searches [as the New York Police Department has done], before we can actually begin to understand how reliable this forensic investigative method is?

G: What are your concerns in this area moving forward?

CG: What keeps me up at night is that I believe we are a few months to a few years away from facial recognition being introduced as identity evidence in court. That, to me, is a worst-case scenario. Facial recognition is not necessarily a reliable way to identify somebody. And the moment a court accepts it as evidence, we've lost any ability to argue that it is not reliable, because regardless of what the science says, the courts will follow case precedent. That's sometimes called the judicial certification of bad science.

I'm very worried about this, because I think once the law certifies it, it becomes that much more difficult to walk back, and that much more dangerous: We will have people wrongfully arrested, taking plea deals even though they're innocent, or wrongfully convicted of crimes, because they were misidentified using facial recognition.

Thanks to Dave Tepps for copy editing this article.

Benjamin Powers, Technology Reporter

Benjamin Powers is a technology reporter for Grid, where he explores the interconnection of technology and privacy within major stories.