Facial Recognition and Policing
Facial Recognition Technologies (FRT) are increasingly being used in policing, but have been plagued with problems due to biases in data. While reducing such bias is important, taking a step back and asking some fundamental questions can expose the underlying assumptions behind this goal: Is it even necessary to use FRT? Can an ideal FRT even be built?
Dominant Framing of the Problem
Facial Recognition Technology (FRT) aims to automate the identification of individuals in images and videos by using machine learning to compare captured faces against a large database of known faces. It is increasingly being used for law enforcement activities such as identifying and tracking potential criminal suspects or confirming people’s identity in digital and physical spaces. However, a key issue with FRT is that it has been significantly less accurate for people of color, particularly Black women, than for White men. Coupled with racial injustices in the policing system, FRT can lead to misidentifications and wrongful arrests that disproportionately harm Black people. For example, in 2020 FRT was used to wrongfully arrest Robert Williams, a Black man, for stealing merchandise from a store in Detroit based on an examination of grainy surveillance footage. While the case was soon dismissed by a local court due to a lack of sufficient evidence, the arrest could still persist on Williams’ record, inhibiting his employment prospects.
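At its core, the identification step can be pictured as turning a face into a numerical “embedding” and comparing it against a gallery of stored embeddings. The sketch below is a minimal, hypothetical illustration of that matching step only: the embeddings are random placeholders and the similarity threshold is an assumed value, whereas a real system would obtain embeddings from a deep network trained on large face datasets.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, gallery: dict, threshold: float = 0.6):
    """Return the best-matching gallery identity and its score,
    or (None, score) if no candidate clears the decision threshold."""
    best_id, best_score = None, -1.0
    for identity, emb in gallery.items():
        score = cosine_similarity(probe, emb)
        if score > best_score:
            best_id, best_score = identity, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

# Hypothetical 128-dimensional embeddings standing in for a face database;
# a real system would compute these from images with a trained model.
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
probe = gallery["person_42"] + rng.normal(scale=0.1, size=128)  # noisy re-capture
print(identify(probe, gallery))
```

Every choice in this sketch, which database is searched, how the threshold is set, and what counts as a “match”, is a design decision with consequences for who gets flagged.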
Part of the reason why FRT has not worked well for people of all demographics is bias in the data used to train the machine learning algorithms underlying it. Predatory policing of Black people has led to their disproportionate incarceration and has consequently elevated the proportion of images, such as mugshots, that implicate Black people as criminals. When this biased image data is fed into the algorithms underlying FRT, the technology becomes more likely to wrongfully implicate a Black person, reflecting the fact that Black people are disproportionately incarcerated. Given that the technology is being increasingly adopted for law enforcement, a key focus of research into FRT has been on reducing this bias, i.e., making the technology more accurate at identifying all kinds of people: “How can we reduce bias in our facial recognition algorithms?”
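One part of this effect can be seen even in a toy simulation (all numbers below are assumptions, and the embeddings are random vectors rather than real faces): holding the match threshold fixed, a group that is enrolled in the database in far greater numbers simply presents more opportunities for a spurious “hit” against an unrelated probe.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 128  # hypothetical embedding dimensionality

def random_embeddings(n: int) -> np.ndarray:
    """n random unit-length vectors standing in for face embeddings."""
    v = rng.normal(size=(n, DIM))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# A database skewed toward one group, mirroring how disproportionate
# incarceration inflates mugshot collections (sizes are illustrative).
gallery_group_a = random_embeddings(200)
gallery_group_b = random_embeddings(2000)

def false_match_rate(gallery: np.ndarray, n_probes: int = 500,
                     threshold: float = 0.3) -> float:
    """Fraction of unrelated probes whose best gallery similarity clears the threshold."""
    probes = random_embeddings(n_probes)
    best = (probes @ gallery.T).max(axis=1)  # cosine similarity of unit vectors
    return float((best >= threshold).mean())

print("spurious hits, group A:", false_match_rate(gallery_group_a))
print("spurious hits, group B:", false_match_rate(gallery_group_b))
```

Even with identical per-comparison behavior, the overrepresented group accumulates far more false matches, which is one mechanism by which a biased database translates into biased outcomes.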
Concerns and Considerations
While it is certainly important for technologies not to discriminate on the basis of race, sex, or other social differences, there are two key concerns with this framing. First, it assumes that an ideal FRT, one that could always correctly identify individuals in any image or video, can be unproblematically built. Second, it uncritically accepts that an ideal FRT is necessary and good for public safety. Both assumptions are worth examining.
First, an ideal FRT would need to be trained on large databases of images and videos covering all people. Building such a database would require mass surveillance of the population regardless of consent, as every individual’s facial data would be needed to ensure accuracy. This is problematic because it invades people’s privacy and overrides their autonomy. On the other hand, if building such databases required informed consent, many people would likely opt out, preventing FRT from recognizing them in the future and rendering the technology less effective.
Second, such a framing accepts that facial recognition is worth developing for the sake of public safety, as it can enable stronger law enforcement. This approach, however, fails to tackle the root causes underlying “illegal” activity, such as systemic injustice and poverty, or the causes of the disproportionate incarceration of Black people, such as overpolicing. Further, as history has shown, those who intend to commit a crime will find new ways of doing so that thwart advancements in FRT, such as by wearing masks. Consequently, FRT only adds to the arms race between the police and “criminals” while contributing little towards eradicating the underlying societal problems.
Reframing the Problem
Drawing on the above concerns and considerations, there are several questions that can be asked to reframe the situation. Instead of asking “how can we reduce the biases of FRT?” one can ask “how can we use the inherent biases in FRT as a lens to examine underlying societal problems?” This positions biases in data as a strength rather than a weakness and leads to additional questions: “If there are disproportionately more Black people in the database, why is that the case?” and “Can examining how the database is constructed reveal problematic biases in society and law enforcement?”
Other directions can also be pursued that challenge not only FRT but also inherent problematic beliefs in law enforcement: “Can we develop technologies that ease tensions between the police and suspects to prevent unnecessary violence?” or “Can there be a more local, communal, and democratic approach to reducing crime in a neighborhood that reduces the need for overpolicing?”
Examining how FRT is situated in a broader societal context and reframing the problems accordingly, as suggested above, is necessary for ethical practice. It helps identify the problematic assumptions upon which existing questions rest and sets a more robust ethical base upon which to pursue new ideas and designs. The intent is not to inhibit the growth of technologies, but rather to make the effort worthwhile in a manner that advances both the designer’s goals and democratic values in society.