What is it about?

We often need to verify an individual's identity from their facial appearance. One common method of verification is the "one-to-one matching task", in which an observer is asked to decide whether a photo ID document (e.g., a passport or driver's licence) matches the person presenting it for inspection. Although this is a common task, average human performance is surprisingly poor, with error rates regularly between 20% and 30%. Recent technological advances mean that many facial recognition algorithms now outperform the average human on these verification tasks. Even so, the human operator is often responsible for reviewing the algorithm's response and then making the final identification decision. Despite such arrangements already being used for identity verification, very little is known about the collaborative performance of these human-algorithm teams. Here we investigate how knowing the decision of a facial recognition system influences the final identification decision made by the human operator in a one-to-one face matching task.


Why is it important?

We show that although humans can use the decisions of highly accurate facial recognition algorithms to improve their own performance, the decisions they make with the help of the system are actually less accurate than those the system makes alone. In other words, humans often fail to correct errors made by the facial recognition system, while also overruling many of the algorithm's correct decisions. Human oversight of facial recognition algorithms is vital, but our research suggests that human ability may limit the effectiveness of the human-algorithm team. Our findings have implications for the effective implementation and oversight of facial recognition technologies.

Perspectives

Human oversight of facial recognition technologies is clearly necessary. However, this arrangement is predicated on the assumption that humans will detect and correct errors from the algorithm, while also accepting the algorithm's correct decisions. Yet, decades of research have shown that humans are liable to make errors when matching unfamiliar faces, often at higher rates than many modern facial recognition algorithms. I hope this article encourages readers to question the assumptions that are built into this model of "human-in-the-loop" algorithm oversight for face matching applications, and to reflect on the difficulties of implementing meaningful algorithm oversight that improves the overall accuracy of the system.

Daniel Carragher

Read the Original

This page is a summary of: Simulated automated facial recognition systems as decision-aids in forensic face matching tasks, Journal of Experimental Psychology: General, December 2022, American Psychological Association (APA),
DOI: 10.1037/xge0001310.
You can read the full text via the DOI above.

