
Ethical Imagination & Technoscientific Practices



May 11 2022

Facial Recognition and Policing

Facial Recognition Technologies (FRT) are increasingly being used in policing, but they have been plagued by problems stemming from biases in their data. While reducing such bias is important, stepping back and asking some fundamental questions can expose the assumptions underlying this goal: Is it even necessary to use FRT? Can an ideal FRT even be built?


Dominant Framing of the Problem

Facial Recognition Technology (FRT) aims to automate the identification of individuals in images and videos by using machine learning to compare the faces captured in them against a large database of known faces. It is increasingly being used for law enforcement activities, such as identifying and tracking potential criminal suspects or confirming people’s identity in digital and physical spaces. A key issue, however, is that FRT has been significantly less accurate for people of color, particularly Black women, than for White men. Coupled with racial injustices in the policing system, FRT can lead to misidentifications and wrongful arrests that disproportionately harm Black people. For example, in 2020 FRT was used to wrongfully arrest Robert Williams, a Black man, for stealing merchandise from a store in Detroit, based on an examination of grainy surveillance footage. While the case was soon dismissed by a local court for lack of sufficient evidence, the arrest could still persist on Williams’ record, inhibiting his employment prospects.
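To make the identification pipeline concrete, here is a minimal sketch of how a 1:N face search against a database can work. It is illustrative only: embed_face is a hypothetical stand-in for a trained embedding model, and the similarity threshold is an assumed value, not a parameter from any real system.

```python
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a trained face-embedding model.

    A real system would use a learned network that maps a face crop to a
    vector such that images of the same person land close together.
    """
    raise NotImplementedError("replace with a trained embedding model")

def identify(probe_image: np.ndarray, gallery: np.ndarray,
             names: list, threshold: float = 0.6):
    """1:N identification: compare one probe face against an enrolled database.

    gallery has shape (N, d): one embedding per enrolled face.
    threshold is an assumed operating point; real deployments tune it.
    """
    probe = embed_face(probe_image)
    # Cosine similarity between the probe and every enrolled embedding.
    sims = gallery @ probe / (np.linalg.norm(gallery, axis=1) * np.linalg.norm(probe))
    best = int(np.argmax(sims))
    if sims[best] >= threshold:
        return names[best], float(sims[best])   # claimed identity + score
    return None, float(sims[best])              # no confident match
```

The design point worth noticing is that “recognition” here reduces to a nearest-neighbor search over whoever happens to be enrolled in the gallery, so the composition of that database directly shapes who can be matched and mismatched.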

Part of the reason FRT has not worked well across demographics is bias in the data used to train its underlying machine learning algorithms. Predatory policing of Black people has led to their disproportionate incarceration and, consequently, to an elevated proportion of images, such as mugshots, that implicate Black people as criminals. When this biased image data is fed into the algorithms underlying FRT, the technology becomes more likely to wrongfully implicate a Black person, precisely because Black people are disproportionately represented in its databases. Given that the technology is being increasingly adopted for law enforcement, a key focus of research into FRT has been on reducing its bias, i.e., making it more accurate at identifying all kinds of people: “How can we reduce bias in our facial recognition algorithms?”
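A toy simulation can illustrate how database composition alone skews outcomes. In the sketch below, every number and the random “embeddings” are assumptions made purely for illustration; the point is that when one group is enrolled in the gallery far more often, probe faces belonging to no one in the gallery still collide with that group more often.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 128          # embedding dimension (illustrative assumption)
THRESHOLD = 0.35   # similarity cutoff for declaring a "match" (illustrative assumption)

def random_unit_vectors(n: int) -> np.ndarray:
    """Stand-in for face embeddings: random points on the unit sphere."""
    v = rng.normal(size=(n, DIM))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# A gallery skewed by over-policing: group A is enrolled four times as often.
gallery = {
    "group_A": random_unit_vectors(8000),
    "group_B": random_unit_vectors(2000),
}

# 1,000 probe faces of people who are NOT in the gallery at all.
probes = random_unit_vectors(1000)

false_matches = {group: 0 for group in gallery}
for probe in probes:
    for group, embeddings in gallery.items():
        # Cosine similarity of the probe against every enrolled face;
        # all vectors are unit length, so a dot product suffices.
        if (embeddings @ probe).max() > THRESHOLD:
            false_matches[group] += 1

# The overrepresented group absorbs far more false matches even though the
# two groups' embeddings are statistically identical.
print(false_matches)  # e.g. {'group_A': ~250, 'group_B': ~70}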

 

 



Concerns and Considerations

While it is certainly important for technologies not to discriminate by race, sex, or other social differences, there are two key concerns with this framing. First, it assumes that an ideal FRT, one that could always correctly identify individuals in any image or video, can be unproblematically built. Second, it uncritically accepts that an ideal FRT is necessary and good for public safety. Both assumptions are problematic.

First, FRT needs to be trained on large databases of images and videos of all people. Building such a database would require mass surveillance of the population regardless of consent, as every individual’s facial data would be needed to ensure accuracy. This is problematic because it invades people’s privacy and overrides their autonomy. If, on the other hand, building such databases required informed consent, many people would likely opt out, preventing FRT from recognizing them and rendering the technology less effective.

Second, such a framing accepts that facial recognition is worth developing for the sake of public safety, as it can enable stronger law enforcement. This approach, however, fails to tackle the root causes underlying “illegal” activity, such as systemic injustice and poverty, or the causes of the disproportionate incarceration of Black people, such as overpolicing. Further, as history has shown, those who intend to commit a crime will find new ways of doing so that thwart advancements in FRT, such as wearing masks. Consequently, FRT only adds to the arms race between the police and “criminals” while contributing little towards eradicating the underlying societal problems.

 

 


Instead of asking “how can we reduce the biases of FRT?” why not try to tackle systemic injustice in policing? For example, “Can we develop technologies that ease tensions between the police and suspects to prevent unnecessary arrests and violence?”

 

 

Reframing the Problem

Drawing on the above concerns and considerations, one can ask several questions that reframe the situation. Instead of asking “how can we reduce the biases of FRT?” one can ask “how can we use the inherent biases in FRT as a lens to examine underlying societal problems?” This positions biases in data as a strength rather than a weakness and leads to additional questions: “If there are disproportionately more Black people in the database, why is that the case?” and “Can examining how the database is constructed reveal problematic biases in society and law enforcement?”

Other directions can also be pursued that challenge not only FRT but also inherent problematic beliefs in law enforcement: “Can we develop technologies that ease tensions between the police and suspects to prevent unnecessary violence?” or “Can there be a more local, communal, and democratic approach to reducing crime in a neighborhood that reduces the need for overpolicing?” 

Examining how FRT is situated in a broader societal context and reframing the problems accordingly, as suggested above, is necessary for ethical practice. It helps identify the problematic assumptions upon which existing questions rest and sets a more robust ethical base upon which to pursue new ideas and designs. The intent is not to inhibit the growth of technologies, but to make the effort worthwhile in a manner that advances both the designer’s goals and democratic values in society.

Written by aanupam3 · Categorized: Contemporary, Featured

May 08 2022

Student Monitoring and Remote Tests

Monitoring students during online tests may be necessary to stop them from cheating, but it can also harm their mental health. Instead of accepting this dilemma, why not focus on a broader question: how can we change testing to actually help students rather than simply assess them?


Dominant Framing of the Problem

As online and remote learning options become more viable, so does the need to assess students remotely. However, exams and tests given remotely during the early stages of the COVID-19 pandemic saw a significant rise in cheating. Students employed a variety of means, such as googling the answers, asking others in online forums and discussion boards, and hiding physical notes around them. This increase in cheating triggered a response from educational institutions in the form of proctoring and tracking tools that continuously monitor students and their screens as they take their tests (Subin, 2021). These tools have reportedly caught many instances of cheating that would otherwise have gone unnoticed (Harwell, 2020).

However, such invasive monitoring can also be detrimental to students’ mental health and intrude on their privacy. Being continually watched, especially under the threat of being wrongfully accused of cheating due to “unusual” eye and head movements, has heightened stress and anxiety in many students. Further, the extensive data collected by such tools could be hacked, exposing students’ private information (Harwell, 2020). This situation raises an important question: “Is it worth invading students’ privacy to prevent cheating in remote tests?”

 


Invading students’ privacy to prevent them from cheating can be detrimental to their mental health

 

 

Concerns and Considerations

There are two key concerns with this framing: its dichotomization of the issue, and its assumption that cheating is symptomatic of a problem with the student rather than with the culture of assessment and education.

First, this framing makes it seem that the situation requires a trade-off between allowing cheating and invading privacy. Such dichotomization is problematic because it forecloses other possibilities, such as designing assessments in which “cheating” has no meaning. For example, consider project-based assessments where students identify problems in their local environments and design solutions using the subject matter learned in class. Such assignments cannot be “cheated” on, as there is no single “correct” answer to cheat for; the problem itself is ambiguous and evolves over time. Nor is it “cheating” to ask for help from others (parents, friends), because learning to ask for support and work with others is often necessary to resolve local problems. Such an approach is also advantageous in that it teaches students to formulate problems and apply what they have learned to them, which is more aligned with professional practice.

Second, this framing places the blame for cheating squarely on the students, ignoring how the design of the educational and assessment system itself can contribute to cheating, especially for struggling students who lack adequate support. One of the primary reasons students cheat on tests is to avoid failure. This is partly the result of a flawed assessment culture that punishes failure on tests rather than using it as an opportunity for learning and growth. For example, failing a test often means repeating the class in its entirety rather than getting support in the specific areas one struggles with.

 

 



 

 

 

Reframing the Problem

Drawing on these concerns, there are several possible avenues for reframing the problem. As discussed above, one could ask, “how can we design better ways of assessing students to support learning and growth?” This would foster exploration of assignments that use evaluation as an intermediate step toward learning, rather than as a way of categorizing students by skill or ability.

One could also question the inherent mistrust of students implied in the original framing and instead focus on community building: “how can we develop a community of learning where teachers and students learn from each other?” Such a question explores the possibility of the class functioning as a team rather than as an aggregate of individuals. For example, the class as a whole could aim to solve a real community problem, such as local water or air pollution, with the teacher as their guide. This would leave little room for “cheating,” as individual students are not judged on their skill, but rather on how they hone those skills to contribute towards resolving the class problem and on their willingness to support others.

Questions can also be asked of the broader educational and assessment culture: “how can we re-design educational environments to better support struggling students?” Such an approach shifts the conversation from catching and punishing struggling students who see no option but to cheat, to identifying and supporting them early on without discrediting or mistrusting them. For example, shifting from a few high-stakes tests to regular low-stakes, iterative assignments allows students to revise and learn from their mistakes. In particular, it gives struggling students multiple chances to improve their grade without being punished for not doing well.

Written by iacosta6 · Categorized: Contemporary, Featured

