Bullying at School
Approaches to ethical problems often focus on alleviating the symptoms or “side effects” of a problem rather than its root causes. While this approach can be useful in some situations, it often does little to stop the problem from occurring in the first place, and may even create other problems along the way. This case examines how such approaches fall short in the case of bullying at school and suggests alternative pathways that attack the problem at its roots.
Dominant Framing of the Problem
Bullying is a concern in many social environments, especially schools. Take the following case as an example (a fictional composite drawn from the real-world stories linked below). A young middle school girl lost her hair due to a condition called alopecia. She was regularly bullied at school. Some students called her “ugly” or “baldy,” while others refused to play with her because they disliked her appearance. She tried wearing wigs, but students often made fun of those too, and tried to pull or tear them off. Further, due to peer pressure, those who tried to become her friend were also bullied or ostracized for it. Consequently, she had almost no friends at school.
Her parents were concerned that the school was not doing enough to prevent bullying. In response, the school is considering two options. One is to set up a counseling service to help children like her cope with bullying. The other is to install cameras to catch instances of bullying so that staff can act in the moment, as well as dissuade bullying through student monitoring. What should be done?
Concerns and Considerations
While both of the above approaches—counseling services and cameras—have the potential to reduce bullying, each makes problematic assumptions about bullying that can, in turn, lead to other problematic consequences. Notably, both focus on the symptoms of bullying rather than its roots.
The key assumption behind the counseling service approach is that the problem resides in the individual being bullied (or that the problem is easiest to resolve by focusing on the bullied person). If those who are bullied can stand up for themselves or cope better with being bullied, then they can overcome bullying. This is a problematic assumption, as it places the burden of change on those who are most vulnerable—the victims of bullying—rather than on the bullies, the culture of bullying, or the school's policies. It frames bullying as a matter of one's “feelings being hurt” rather than a problem that can damage one's mental and physical health and stymie one's educational progress. By acting only on the effects of bullying and not its causes, this approach can inadvertently help legitimize bullying while doing little to curb it.
On the other hand, extensive student monitoring can indeed prevent bullying, but it can also harm students in other ways. The assumption here is that monitoring students helps identify when and where bullying occurs and allows authorities to act on it. However, constant monitoring can produce a “chilling effect” in which students become hesitant to express themselves, ask risky or probing questions, or take on activities where they are likely to fail. Even if students are unaware they are being watched (e.g., through hidden cameras), this approach still invades their right to privacy. It is effectively a form of spying that assumes students cannot be trusted. Monitoring can also exacerbate discrimination against students from marginalized communities, who are more likely to be viewed with suspicion and punished than their peers. Further, the data gathered through student surveillance risks being misused if it falls into the hands of third parties who may try to sell it for profit. Overall, this approach attempts to stop bullying by catching it as it happens, rather than identifying its root causes and focusing on those.
Exploring the causes of a problem rather than only its symptoms expands the space of possible creative resolutions in ethics and in design.
Reframing the Problem
Given the concerns associated with the two approaches discussed above, what can one do to curb bullying at school? There are multiple examples of creative approaches taken by teachers and students that reduce bullying by attacking its various causes rather than attempting to curb its symptoms. Such approaches often take advantage of the local circumstances and specificities of the problem rather than taking a more standardized or universal approach. Two stories illustrate this in relation to bullying that targets students' baldness.
In one case, a teacher realized that their bald student was being bullied because the student was seen as “different” from the others. Rather than counsel or monitor the students, the teacher decided to shave their own head in a show of solidarity with the bald student. This demonstrated to the other students that there is nothing wrong with being bald and that bald people are not “different” from or “less” than others in any way. Many other students then decided to shave their heads as well and became supportive of their bald peer.
In another case, a nine-year-old girl with alopecia was being bullied for her condition. She and her teachers realized that the students who bullied her did so because they did not know why she was losing her hair; they believed her condition was her own fault. Focusing on this issue, the student and her teachers organized an assembly where she gave a presentation on alopecia to her peers in order to educate them about it. “If I have hair or not, I'm still the same person that I've always been,” she said. This approach not only helped reduce her bullying but also allowed her to tell her own story (rather than having someone else stand up for her) in a manner that voiced her concerns and maintained her dignity.
While such approaches may not work everywhere, the point here is to illustrate how thinking about the causes of social problems such as bullying is necessary for resolving them. Focusing on symptoms and developing standardized solutions may help in some cases, but it must be supplemented with a deeper investigation of the issue in relation to the specifics of the situation.
References
https://www.kswo.com/2018/09/24/lawton-girl-with-alopecia-shares-story-about-being-bullied/
https://www.heart.co.uk/news/feelgood/teacher-shaved-head-bullying/
The Heinz Dilemma
Most ethics cases are presented in the form of “dilemmas” with clear-cut choices in an attempt to “focus” on the core problem. Should we steal or not? Should we prioritize the many over the few? Should we focus on long-term or short-term gains? The Heinz dilemma is a popular example of this approach. Here we highlight how it can limit our ability to imagine alternative possibilities.
Dominant Framing of the Problem
The Heinz Dilemma (and its variants) is commonly used in ethics courses as a way of inviting students to think about moral dilemmas. One version of the dilemma proceeds as follows: A woman is lying very sick in the hospital. The doctor says that she must be given a very rare and expensive drug as soon as possible in order to survive. Her husband Heinz rushes to the drug store to try to buy the drug. But after talking with the druggist, he learns that he cannot afford it. He pleads with the druggist to give him the drug, arguing that he will pay for it later. However, the druggist refuses, stating that they deserve to be paid for creating such a rare drug and cannot just hand it out; if they give it to Heinz for free, then soon everyone in a similar situation will come asking for it. Heinz tries to gather money from his friends and relatives, but he still cannot afford the drug. He steps outside and paces around anxiously.
It is now near closing time. Heinz realizes that he can break into the store by picking the lock on the door once the druggist leaves. He is now pondering a dilemma: Should he steal the drug? Or should he let his wife die?
Different ethical theories are usually applied to help students think about the situation. A utilitarian approach might argue that Heinz should break into the store as the happiness that he and his wife will gain by doing so outweighs the risk of being put in prison (as well as the sadness/anger of the druggist). A Kantian approach might argue that stealing in principle is morally wrong and therefore Heinz has no right to break into the store, regardless of the consequences.
Concerns and Considerations
By limiting the options available, the dilemma attempts to bring into focus the core problem here: when is it justifiable to steal? While this can help students understand the value of ethical theories in such dilemma-like situations, the approach raises two key concerns.
First, this approach relies on a predetermined and reductive framing of the situation. Should Heinz steal or not? Real-life situations are rarely reducible to such dilemmas. Instead, they are rich with possible alternative pathways. Focusing on such reductive dilemmas can limit ethical reasoning to a theoretical exercise, rather than a practice grounded in the specificities of the situation.
Second, the dilemma legitimizes both choices as equally valid. Stealing can be justified on the basis of one theory, not stealing on the basis of another. Such “both-siding” can inadvertently lead to relativistic thinking, giving the illusion that ethics is a purely subjective exercise. In practice, however, this is rarely the case. The goal of ethical inquiry is not to provide justification for all choices, but to help one investigate the situation for more information so that a concrete decision can be made. This involves strategies such as developing more nuanced criteria for examining the situation, researching possible outcomes, learning about perspectives one may not have considered earlier, and so on. The predetermined legitimization of choices in such dilemmas is precisely what ethical reasoning aims to dissolve.
Thinking of real situations as dilemmas forecloses possibilities that can serve as practical resolutions
Reframing the Problem
There are several possibilities foreclosed by such a reductive framing. Can Heinz negotiate a payment installment plan with the druggist? Can the doctors find a temporary measure to delay death until the medicine can be obtained? Could the druggist be compensated by insurance if Heinz took the drug? By focusing on just two choices (to steal or not to steal), the Heinz dilemma limits our ways of thinking about the situation and forecloses options. Drawing on this capacious space of possibilities, the problem can be reframed in several ways: “What can Heinz's friends do to help him? How can the justice system prevent such price gouging? What other modes of payment can the druggist be persuaded to accept?” and so on. Probing the specificities, perspectives, uncertainties, and structures underlying the situation, rather than limiting the options and legitimizing them all equally, is necessary for making an informed ethical decision about the way forward.
Self-Driving Cars and Algorithmic Morality
The discourse around self-driving cars has been dominated by an emphasis on their potential to reduce the number of accidents. At the same time, proponents acknowledge that self-driving cars would inevitably be involved in fatal accidents where moral algorithms would decide the fate of those involved. This is a necessary trade-off, proponents suggest, in order to reap the benefits of this new technology. Yet, if we look closely, we find underlying assumptions that obscure important ethical and political nuances and undermine the significance of human life and living.
Dominant Framing of the Problem
The ethical problems associated with self-driving cars are commonly framed in terms of the Trolley Problem (and its variations). This problem puts forth a hypothetical situation in which you find yourself driving a trolley at 60 miles an hour toward five workers on a track. You step on the brakes only to find that they are broken. Staying on this path, you will inevitably kill all five workers. At this moment, you notice a sidetrack where only one worker is standing. By veering onto this sidetrack, you will kill only one worker. What will you do? What if, instead, you could push a fat man into the path of the trolley to stop it? What would you do then? While the goal of such hypothetical scenarios is to help analyze pressing ethical issues ranging from abortion to insurance policies, the scenarios themselves are highly improbable. In the context of designing self-driving cars, however, these scenarios are often taken quite literally, with the trolley seen as a stand-in for a self-driving car whose algorithm must make an ethical decision in such a situation.
It is therefore important both to take the challenges posed by the Trolley Problem seriously and to engage the apparent contradictions in people's responses to them, questioning trolley experiments as templates for algorithmic morality. More specifically, we need to question three dominant assumptions:
- That the principles upheld by experimental ethics approaches such as the Trolley Problem sufficiently resemble real ethical situations
- That identifying or agreeing on a set of principles is sufficient for creating moral algorithms that adequately control the behavior of self-driving cars
- That we should accept algorithmic morality in the context of self-driving cars, given that by adopting such cars we would, in theory, save more lives than we lose.
Hypothetical situations suggested by the Trolley Problem are being taken literally in the case of self-driving cars
Concerns and Considerations
First, is it reasonable to assume that principles upheld by experimental ethics are sufficient to resolve those same ethical situations if faced in real life? No. To illustrate why not, we can highlight the improbable and binary framing of these scenarios, noting how distant they are from actual life situations. As a result, the values they uphold may or may not sufficiently serve lived ethical situations. Consider, for example, the fact that we learn no details about the five workers in the original framing of the trolley problem: Who are they? Are they young or old? How are they positioned in relation to the trolley? Are they capable of seeing, hearing, and reacting to the trolley as it approaches them? Would I, the trolley driver, be unfairly targeting the one worker, given that he is just a bystander in the situation until I decide to target him, thus giving him far less time to reflect and react? These and other similar questions point to the uncertain, complex, and living nature of the situation that I, the driver of the trolley, could plausibly be facing. As a result, while it may seem that the utilitarian principle of “saving the most lives” is applicable when presented with the simplified version of the scenario, this principle may or may not serve in an actual lived situation.
Is identifying or agreeing on a set of principles for creating moral algorithms that adequately control the behavior of self-driving cars sufficient? Also no. Solving ethical problems is not a matter of deciding which principles should be upheld, since every principle is rendered differently depending on the situation. Resolving or addressing ethical situations therefore involves deliberating on what the situation is and what the pertinent values entail in action. Say, for example, that we agree on the principle of maximizing life (defined in terms of life expectancy). Now imagine the self-driving car is about to have an accident and is faced with two choices: an older adult on one side and a young child on the other. Here, interpreting the principle in terms of life expectancy would result in deciding to save the child's life. In another scenario, what if there is a teenager on one side and an older woman pushing a stroller on the other? What does maximizing life entail in this case? The teenager may have a longer life expectancy, but they may also be able to react faster and move out of the way more easily than the older woman. This would entail veering toward the teenager instead, potentially saving everyone and “maximizing life.” Given the relevance of the details of the situation to any resolution of the problem, depending on abstract representations such as those emerging from the trolley problem inherently limits our ability to think critically about design.
Finally, in spite of these fundamental limitations, the potential for error, the uncertainties involved, and the unique nature of every situation, proponents still argue that we should favor self-driving cars because they are better than humans at handling these problems and will therefore save more lives than would otherwise be saved. But is this a valid argument? Again, no. The lives lost to algorithms with systemic biases and limitations are different from the lives lost to human driver error: the behaviors of algorithms are decided a priori by those who have power over designing them, while the behaviors of human drivers are not predetermined. In the previous example, older people would be unfairly targeted every time if “maximizing life” (understood in terms of one's age) were taken as the core principle. This is especially egregious as those older people may have had no say in deciding on the principle of “maximizing life” in the first place.
Reframing the Problem
What if we decide that moral algorithms are entirely unacceptable? What if we choose to reject them no matter what? A genuine concern with the many lives lost in car accidents, now and in the future (a concern that transcends both the opportunity-centered framings of technology enthusiasts and the liability considerations of the automotive industry), could serve as a starting point for rethinking mobility as it connects to the design of our cities and the future of our communities. Indeed, a serious consideration of the ethical issues raised by self-driving cars opens a space for design that is innovative, inclusive, and mindful of both immediate and broad consequences. We can gesture toward multiple possibilities for radically rethinking this design space, such as examining how our cities would be transformed if self-driving cars were to inhabit them, and imagining alternative transportation systems for more equitable and just cities.
For the former, we must consider both tangible and intangible effects, such as how this new technology might change the character of our cities and the quality of day-to-day movement. The decision to accept algorithmic morality cannot be made simply on the basis of the number of lives potentially saved. How much public funding and public space would have to be devoted to smart cars, and to whose benefit? How would it impact the legal system, one that is already biased against bystanders, bikers, and small children (Jain 2004)? What would it be like to live in a city where at every moment you might be the target of an accident decided solely by the preprogrammed logic of an algorithm? Would one have to travel with a stroller at all times to be safe?
For the latter, we can ask: How did we get to be so reliant on cars for our daily transportation? How did the many deaths resulting from car accidents become normalized? While the forces of the historical trajectories that led to car-centric cities and lifestyles are strong, we are not bound to follow them. To be sure, there are other models and trajectories we can draw upon. Examples include the Netherlands' bike-friendly laws and public policy, which resulted in a reliable and safe biking infrastructure, or the more recent Swedish Vision Zero, with its ultimate target of redesigning urban infrastructures so as to entirely eliminate deaths or serious injuries caused by cars. We might indeed think of the introduction of self-driving cars as an occasion for a radical rethinking of mobility that challenges and reorients dominant car-centric visions. For this, however, we need to engage multiple disciplinary, social, and historical perspectives and to embrace more nuanced framings of the problems of mobility.
Facial Recognition and Policing
Facial Recognition Technologies (FRT) are increasingly being used in policing but have been plagued by problems due to biases in data. While reducing such bias is important, taking a step back and asking some fundamental questions can surface the underlying assumptions behind this goal: Is it even necessary to use FRT? Can an ideal FRT even be built?
Dominant Framing of the Problem
Facial Recognition Technology (FRT) aims to automate the identification of individuals in images and videos by using machine learning to read those images and videos and compare them against a large database. It is increasingly used in law enforcement activities, such as identifying and tracking potential criminal suspects or confirming people's identities in digital and physical spaces. A key issue, however, is that FRT has been significantly less accurate for people of color, particularly Black women, as compared to white men. Coupled with racial injustices in the policing system, FRT can lead to wrongful arrests and misidentifications prejudiced against Black people. For example, in 2020 FRT was used to wrongfully arrest Robert Williams, a Black man, for stealing merchandise from a store in Detroit based on an examination of grainy surveillance footage. While the case was soon dismissed by a local court due to a lack of sufficient evidence, the arrest could still persist on Williams' record, inhibiting his employment prospects.
Part of the reason FRT has not worked well for people of all demographics is bias in the data used to train the machine learning algorithms underlying it. Predatory policing of Black people has led to their disproportionate incarceration and has thereby elevated the proportion of images, such as mugshots, that implicate Black people as criminals. When this biased image data is fed into the algorithms underlying FRT, the technology becomes more likely to wrongfully implicate a Black person, reflecting the fact that Black people are disproportionately incarcerated. Given that the technology is being increasingly adopted for law enforcement, a key focus of FRT research has been reducing this bias, i.e., making the technology more accurate at identifying all kinds of people: “How can we reduce bias in our facial recognition algorithms?”
Concerns and Considerations
While it is certainly important for technologies not to discriminate by race, sex, or other social differences, there are two key concerns with this framing. First, it assumes that an ideal FRT—one that could always correctly identify individuals in any image or video—can be unproblematically built. Second, it uncritically accepts that an ideal FRT is necessary and good for public safety. Both assumptions are problematic.
First, FRT needs to be trained on large databases of images and videos of all people. Building such a database would require mass surveillance of the population regardless of consent, as every individual's facial data would be needed to ensure accuracy. This is problematic as it would invade people's privacy and override their autonomy. On the other hand, if the process of building such databases required informed consent, then people could opt out of it and thereby prevent FRT from recognizing them in the future, rendering the technology less effective.
Second, such a framing accepts that facial recognition is worth developing for the sake of public safety, as it can enable stronger law enforcement. This approach, however, fails to tackle the root causes underlying “illegal” activity, such as systemic injustice and poverty, or the causes of the disproportionate incarceration of Black people, such as overpolicing. Further, as history has shown, those who intend to commit a crime will find new ways of thwarting advancements in FRT, such as by wearing masks. Consequently, FRT only adds to the arms race between the police and “criminals” while contributing little toward eradicating the underlying societal problems.
Reframing the Problem
Drawing on the above concerns and considerations, there are several questions that can be asked to reframe the situation. Instead of asking “how can we reduce the biases of FRT?” one can ask “how can we use the inherent biases in FRT as a lens to examine underlying societal problems?” This positions biases in data as a strength rather than a weakness and leads to additional questions: “If there are disproportionately more Black people in the database, why is that the case?” and “Can examining how the database is constructed reveal problematic biases in society and law enforcement?”
Other directions can also be pursued that challenge not only FRT but also inherent problematic beliefs in law enforcement: “Can we develop technologies that ease tensions between the police and suspects to prevent unnecessary violence?” or “Can there be a more local, communal, and democratic approach to reducing crime in a neighborhood that reduces the need for overpolicing?”
Examining how FRT is situated in a broader societal context and reframing the problems accordingly, as suggested above, is necessary for ethical practice. It helps identify the problematic assumptions on which existing questions rest and sets a more robust ethical base upon which to pursue new ideas and designs. The intent is not to inhibit the growth of technologies, but rather to make the effort worthwhile in a manner that advances both the designer's goals and democratic values in society.