Self-Driving Cars and Algorithmic Morality
The discourse around self-driving cars has been dominated by an emphasis on their potential to reduce the number of accidents. At the same time, proponents acknowledge that self-driving cars would inevitably be involved in fatal accidents where moral algorithms would decide the fate of those involved. This is a necessary trade-off, proponents suggest, in order to reap the benefits of this new technology. Yet, if we look closely, we find underlying assumptions that obscure important ethical and political nuances and undermine the significance of human life and living.
Dominant Framing of the Problem
The ethical problems associated with self-driving cars are commonly framed in terms of the Trolley Problem (and its variations). This problem puts forth a hypothetical situation in which you find yourself driving a trolley at 60 miles an hour toward five workers on a track. You step on the brakes only to find that they are broken. Staying on this path, you will inevitably kill all five workers. At this moment, you notice a sidetrack where only one worker is standing. By veering onto this sidetrack, you will kill only one worker. What will you do? What if instead you could push a fat man into the path of the trolley to stop it? What would you do then? While the goal of such hypothetical scenarios is to help analyze pressing ethical issues ranging from abortion to insurance policies, the scenarios themselves are highly improbable. In the context of designing self-driving cars, however, these scenarios are often taken quite literally, with the trolley serving as a stand-in for a self-driving car whose algorithm must make an ethical decision in such a situation.
It is therefore important both to take the challenges posed by the Trolley Problem seriously and to engage the apparent contradictions in people's responses to them, questioning trolley experiments as templates for algorithmic morality. More specifically, we need to question three dominant assumptions:
- That the principles upheld by experimental ethics approaches such as the Trolley Problem sufficiently resemble real ethical situations
- That identifying or agreeing on a set of principles is sufficient for creating moral algorithms that adequately control the behavior of self-driving cars
- That we can accept algorithmic morality in the context of self-driving cars, given that by adopting such cars we would, in theory, save more lives than we lose
Concerns and Considerations
First, is it reasonable to assume that principles upheld by experimental ethics are sufficient to resolve those same ethical situations if faced in real life? No. To illustrate why not, we can highlight the improbable and binary framing of these scenarios, noting how distant they are from actual life situations. As a result, the values they uphold may or may not sufficiently serve lived ethical situations. Consider, for example, the fact that we do not learn any details about the five workers in the original framing of the Trolley Problem: Who are they? Are they young or old? How are they positioned in relation to the trolley? Are they capable of seeing, hearing, and reacting to the trolley as it approaches them? Would I, the trolley driver, be unfairly targeting the one worker, given that he is just a bystander in the situation until I decide to target him, thus giving him far less time to reflect and react? These and similar questions point to the uncertain, complex, and living nature of the situation that I, the driver of the trolley, could plausibly be facing. As a result, while the utilitarian principle of "saving most lives" may seem applicable when presented with the simplified version of the scenario, this principle may or may not serve in an actual lived situation.
Is identifying or agreeing on a set of principles sufficient for creating moral algorithms that adequately control the behavior of self-driving cars? Also no. Solving ethical problems is not a matter of deciding which principles should be upheld, since every principle is rendered differently based on the situation. Resolving or addressing ethical situations therefore involves deliberating on what the situation is and what the pertinent values entail in action. Say, for example, that we agree on the principle of maximizing life (defined in terms of life expectancy). Now imagine the self-driving car is about to have an accident and is faced with two choices: an older adult on one side and a young child on the other. Here, interpreting the principle in terms of life expectancy would result in deciding to save the child's life. In another scenario, what if there is a teenager on one side and an older woman pushing a stroller on the other? What does maximizing life entail in this case? The teenager may have a longer life expectancy, but they may also be able to react faster and move out of the way more readily than the older woman. This would entail veering toward the teenager instead, potentially saving everyone and "maximizing life." Given the relevance of the details of the situation to any resolution of the problem, depending on abstract representations such as those emerging from the Trolley Problem inherently limits our ability to think critically about design.
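To make this brittleness concrete, consider a minimal sketch, purely illustrative and not drawn from any real system, of what operationalizing "maximize life" as expected life-years lost might look like. All names, numbers, and the `evasion_chance` feature are hypothetical assumptions; the point is only that the same fixed principle flips its verdict once one situational detail is added.

```python
from dataclasses import dataclass

@dataclass
class Bystander:
    label: str
    life_expectancy_years: float   # naive proxy for "life" (hypothetical numbers)
    evasion_chance: float          # probability this person can move out of the way

def expected_years_lost(person: Bystander) -> float:
    # Expected loss if the car veers toward this person,
    # discounted by their chance of evading.
    return person.life_expectancy_years * (1.0 - person.evasion_chance)

def choose_target(a: Bystander, b: Bystander) -> Bystander:
    # "Maximize life" operationalized as: veer toward whichever
    # side minimizes expected years of life lost.
    return a if expected_years_lost(a) <= expected_years_lost(b) else b

# Scenario 1: abstract framing, no situational detail (nobody can evade).
teen = Bystander("teenager", 65.0, 0.0)
elder = Bystander("older adult", 15.0, 0.0)
print(choose_target(teen, elder).label)   # targets the older adult

# Scenario 2: same people, but the teenager can likely dodge.
teen2 = Bystander("teenager", 65.0, 0.9)
print(choose_target(teen2, elder).label)  # now targets the teenager
```

The rule itself never changes; only one situational feature does, and the outcome reverses. Whatever weight such a feature deserves is precisely the kind of deliberation that fixing a principle in advance forecloses.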
Finally, in spite of the fundamental limitations and potential for error, the uncertainties involved, and the unique nature of every situation, proponents still argue that we should favor self-driving cars because they are better than humans at handling these problems and will therefore save more lives than otherwise. But is this a valid argument? The answer is, again, no. This is because lives lost to algorithms with systemic biases and limitations are different from lives lost to human driver error. The behaviors of algorithms are decided a priori by those who have power over designing them, while the behaviors of human drivers are not pre-determined. In the previous example, older people would unfairly be targeted every time if "maximizing life" (understood in terms of one's age) is taken as the core principle. This is especially egregious given that those same older people may have had no say in deciding on the principle of "maximizing life" in the first place.
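A toy simulation can make the difference between pre-programmed and human error vivid. The numbers below are entirely hypothetical; the sketch only contrasts a deterministic a priori rule, which concentrates all harm on one group, with varied human error, modeled here crudely as chance, which spreads the same number of losses across groups.

```python
import random

random.seed(0)

GROUPS = ["older adult", "teenager"]

def algorithmic_outcome() -> str:
    # A priori rule: "maximize life expectancy" always selects
    # the older person as the target. Deterministic, every time.
    return "older adult"

def human_outcome() -> str:
    # Human error is not pre-determined: who is harmed varies with
    # circumstance, modeled here (very crudely) as a coin flip.
    return random.choice(GROUPS)

trials = 10_000
algo_counts = {g: 0 for g in GROUPS}
human_counts = {g: 0 for g in GROUPS}
for _ in range(trials):
    algo_counts[algorithmic_outcome()] += 1
    human_counts[human_outcome()] += 1

print(algo_counts)   # every single loss falls on one group
print(human_counts)  # losses split roughly evenly
```

The aggregate death toll can be identical in both columns, yet the distributions differ radically: the algorithm encodes a standing policy about who is expendable, which is a political decision, not merely a statistical one.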
Reframing the Problem
What if we decide that moral algorithms are entirely unacceptable? What if we choose to reject them no matter what? A genuine concern with the many lives lost in car accidents, now and in the future, could serve as a starting point to rethink mobility as it connects to the design of our cities and the future of our communities. Such a concern transcends both the false binary framings of technology enthusiasts' opportunity-centered approach and the liability considerations of the automotive industry. Indeed, a serious consideration of the ethical issues raised by self-driving cars opens a space for design that is innovative, inclusive, and mindful of both immediate and broad consequences. We can gesture toward multiple possibilities for radical rethinking of this design space, such as examining how our cities would be transformed if self-driving cars were to inhabit them, and imagining alternative transportation systems for more equitable and just cities.
For the former, we must consider both tangible and intangible effects, such as how this new technology might change the character of our cities and the quality of movement day to day. The decision to accept algorithmic morality cannot be made simply on the basis of the number of lives potentially saved. How much public funding and public space would have to be devoted to smart cars, and to whose benefit? How would it impact the legal system, one that is already biased against bystanders, bikers, and small children (Jain 2004)? What would it be like to live in a city where at every moment you might be the target of an accident decided solely by the preprogrammed logic of an algorithm? Would it be safer to travel with a stroller at all times?
How did we get to be so reliant on cars for our daily transportation? How did the many deaths resulting from car accidents become normalized? While the forces of the historic trajectories that led to car-centric cities and lifestyles are strong, we are not bound to follow them. To be sure, there are other models and trajectories that we can draw upon. Examples include the Netherlands' bike-friendly laws and public policy, which resulted in a reliable and safe biking infrastructure, or the more recent Swedish Vision Zero, with the ultimate target of redesigning urban infrastructures so as to entirely eliminate deaths and serious injuries caused by cars. We might indeed think of the introduction of self-driving cars as an occasion for a radical rethinking of mobility that challenges and reorients dominant car-centric visions. For this, however, we need to engage multiple disciplinary, social, and historical perspectives and to embrace more nuanced framings of the problems of mobility.