
Ethical Imagination & Technoscientific Practices



Sep 01 2022

Self-Driving Cars and Algorithmic Morality

The discourse around self-driving cars has been dominated by an emphasis on their potential to reduce the number of accidents. At the same time, proponents acknowledge that self-driving cars would inevitably be involved in fatal accidents where moral algorithms would decide the fate of those involved. This is a necessary trade-off, proponents suggest, in order to reap the benefits of this new technology. Yet, if we look closely, we find underlying assumptions that obscure important ethical and political nuances and undermine the significance of human life and living.


Dominant Framing of the Problem

The ethical problems associated with self-driving cars are commonly framed in terms of the Trolley Problem (and its variations). This problem puts forth a hypothetical situation in which you find yourself driving a trolley at 60 miles an hour toward five workers on a track. You step on the brakes only to find that they are broken. Staying on this path, it is inevitable that you will kill all five workers. At this moment, you notice a sidetrack where only one worker is standing. By veering off to this sidetrack, you will kill only one worker. What will you do? What if instead you could push a fat man into the path of the trolley to stop it? What would you do then? While the goal of such hypothetical scenarios is to help analyze pressing ethical issues ranging from abortion to insurance policies, the scenarios themselves are highly improbable. In the context of designing self-driving cars, however, these scenarios are often taken quite literally, with the trolley being seen as a stand-in for a self-driving car whose algorithm must make an ethical decision in such a situation.

It is, therefore, important both to take the challenges posed by the Trolley Problem seriously and to engage the apparent contradictions in people's responses to them, questioning trolley experiments as templates for algorithmic morality. More specifically, we need to question three dominant assumptions:

  • That the principles upheld by experimental ethics approaches such as the Trolley Problem sufficiently resemble real ethical situations
  • That identifying or agreeing on a set of principles is sufficient for creating moral algorithms that adequately control the behavior of self-driving cars
  • That we can accept algorithmic morality in the context of self-driving cars, given that by adopting such cars we would, in theory, save more lives than we lose.


Hypothetical situations suggested by the Trolley Problem are being taken literally in the case of self-driving cars

Concerns and Considerations

First, is it reasonable to assume that the principles upheld by experimental ethics are sufficient to resolve those same ethical situations if faced in real life? No. To illustrate why not, consider the improbable and binary framing of these scenarios and how distant they are from actual life situations. As a result, the values they uphold may or may not sufficiently serve lived ethical situations. Consider, for example, the fact that we do not learn any details about the five workers in the original framing of the trolley problem: Who are they? Are they young or old? How are they positioned in relation to the trolley? Are they capable of seeing, hearing, and reacting to the trolley as it approaches them? Would I, the trolley driver, be unfairly targeting the one worker, given that he is just a bystander in the situation until I decide to target him, thus giving him far less time to reflect and react? These and similar questions point to the uncertain, complex, and living nature of the situation that I, the driver of the trolley, could plausibly be facing. As a result, while the utilitarian principle of "saving the most lives" may seem applicable when presented with the simplified version of the scenario, it may or may not serve in an actual lived situation.

Is identifying or agreeing on a set of principles for creating moral algorithms that adequately control the behavior of self-driving cars sufficient? Also no. Solving ethical problems is not a matter of deciding which principles should be upheld, since every principle is rendered differently based on the situation. Resolving or addressing ethical situations therefore involves deliberating on what the situation is and what the pertinent values entail in action. Say, for example, that we agree on the principle of maximizing life (defined in terms of life expectancy). Now imagine the self-driving car is about to have an accident and is faced with two choices: an older adult on one side and a young child on the other. Here, interpreting the principle in terms of life expectancy would result in deciding to save the child's life. In another scenario, what if there is a teenager on one side and an older woman pushing a stroller on the other? What does maximizing life entail in this case? The teenager may have a longer life expectancy, but they may also be able to react faster and move out of the way more easily than the older woman. This would entail veering towards the teenager instead, potentially saving everyone and "maximizing life". Given the relevance of the details of the situation to any resolution of the problem, relying on abstract representations such as those emerging from the trolley problem inherently limits our ability to think critically about design.
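To make this concern concrete, consider a minimal sketch of what encoding "maximize life" might look like. Everything here is hypothetical: the fields, numbers, and probabilities are invented stand-ins, not any real vehicle's control logic. The point is that the abstract principle only becomes an algorithm once contestable modeling choices (whose life expectancy? whose ability to evade?) have been fixed in advance.

```python
# A minimal, hypothetical sketch of encoding "maximize life" as a decision rule.
from dataclasses import dataclass

@dataclass
class Person:
    label: str
    life_expectancy_years: float   # one possible reading of "maximize life"
    evasion_probability: float     # chance of moving out of the way in time

def expected_life_lost(group: list[Person]) -> float:
    """Expected years of life lost if the car veers toward this group."""
    return sum(p.life_expectancy_years * (1 - p.evasion_probability) for p in group)

def choose_path(option_a: list[Person], option_b: list[Person]) -> str:
    """Veer toward whichever side minimizes expected life lost."""
    return "A" if expected_life_lost(option_a) <= expected_life_lost(option_b) else "B"

# Scenario 1: the life-expectancy reading targets the older adult.
print(choose_path([Person("older adult", 10, 0.2)],
                  [Person("young child", 70, 0.2)]))        # -> A

# Scenario 2: once evasion ability is modeled, the same principle
# targets the teenager rather than the woman with the stroller.
print(choose_path([Person("teenager", 65, 0.9)],
                  [Person("woman", 45, 0.2),
                   Person("infant", 78, 0.0)]))             # -> A
```

Changing any of these a priori modeling choices flips the decision, which is precisely why agreement on the principle alone settles nothing.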

Finally, in spite of these fundamental limitations, the potential for error, the uncertainties involved, and the unique nature of every situation, proponents still argue that we should favor self-driving cars because they are better than humans at handling these problems and will therefore save more lives. But is this a valid argument? Again, no. The lives lost to algorithms that have systemic biases and limitations are different from lives lost to human driver error. The behaviors of algorithms are decided a priori by those who have power over designing them, while the behaviors of human drivers are not pre-determined. In the previous example, older people would unfairly be targeted every time if "maximizing life" (understood in terms of one's age) were taken as the core principle. This is especially egregious as these older people may have had no say in deciding on the principle of "maximizing life" in the first place.

We must consider both tangible and intangible effects, such as how this new technology might change the character of our cities and the quality of movement day to day.

Reframing the Problem

What if we decide that moral algorithms are entirely unacceptable? What if we choose to reject them no matter what? A genuine concern with the many lives lost in car accidents now and in the future, one that transcends both the false binary framings of technology enthusiasts' opportunity-centered approach and the liability considerations of the automotive industry, could serve as a starting point to rethink mobility as it connects to the design of our cities and the future of our communities. Indeed, a serious consideration of the ethical issues raised by self-driving cars opens a space for design that is innovative, inclusive, and mindful of both immediate and broad consequences. We can gesture towards multiple possibilities for this kind of radical rethinking of the design space, such as examining how our cities would be transformed if self-driving cars were to inhabit them, and imagining alternative transportation systems for more equitable and just cities.

For the former, we must consider both tangible and intangible effects, such as how this new technology might change the character of our cities and the quality of movement day to day. The decision to accept algorithmic morality cannot be made simply on the basis of the number of lives potentially saved. How much public funding and public space would have to be devoted to smart cars, and to whose benefit? How would it impact the legal system, one that is already biased against bystanders, bikers, and small children (Jain 2004)? What would it be like to live in a city where at every moment you might be the target of an accident decided solely by the preprogrammed logic of an algorithm? Would it be prudent to travel with a stroller at all times to assure one's safety?

How did we get to be so reliant on cars for our daily transportation? How did the many deaths resulting from car accidents become normalized? While the forces of the historical trajectories that led to car-centric cities and lifestyles are strong, we are not bound to follow them. To be sure, there are other models and trajectories that we can draw upon. Examples include the Netherlands' bike-friendly laws and public policy, which resulted in a reliable and safe biking infrastructure, or the more recent Swedish Vision Zero, with its ultimate target of redesigning urban infrastructures so as to entirely eliminate deaths and serious injuries caused by cars. We might indeed think of the introduction of self-driving cars as an occasion for a radical rethinking of mobility that challenges and reorients dominant car-centric visions. For this, however, we need to engage multiple disciplinary, social, and historical perspectives and to embrace more nuanced framings of the problems of mobility.

Written by aanupam3 · Categorized: Contemporary

May 11 2022

Facial Recognition and Policing

Facial Recognition Technologies (FRT) are increasingly being used in policing but have been plagued with problems due to biases in data. While reducing such bias is important, taking a step back and asking some fundamental questions can expose the underlying assumptions behind this goal: Is it even necessary to use FRT? Can an ideal FRT even be built?


Dominant Framing of the Problem

Facial Recognition Technology (FRT) aims to automate the identification of individuals in images and videos by using machine learning to compare them against a large database. It is increasingly being used for law enforcement activities, such as identifying and tracking potential criminal suspects or confirming people's identity in digital and physical spaces. However, a key issue with FRT is that it has been significantly less accurate for people of color, particularly Black women, as compared to White men. Coupled with racial injustices in the policing system, FRT can lead to wrongful arrests and misidentifications prejudiced against Black people. For example, in 2020 FRT was used to wrongfully arrest Robert Williams, a Black man, for stealing merchandise from a store in Detroit, based on an examination of grainy surveillance footage. While the case was soon dismissed by a local court due to a lack of sufficient evidence, the arrest could still persist on Williams's record, inhibiting his employment prospects.

Part of the reason why FRT has not worked well for people of all demographics is bias in the data used to train the machine learning algorithms underlying it. Predatory policing of Black people has led to their disproportionate incarceration and has consequently elevated the proportion of images, such as mugshots, that implicate Black people as criminals. When this biased image data is fed into the algorithms underlying FRT, the technology becomes more likely to wrongfully implicate a Black person, reproducing the disproportionate incarceration reflected in its training data. Given that the technology is being increasingly adopted for law enforcement, a key focus of research into FRT has been on reducing its bias, i.e., making it more accurate at identifying all kinds of people: "How can we reduce bias in our facial recognition algorithms?"
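A toy simulation can illustrate one mechanism behind this disparity. The sketch below is purely synthetic: the embeddings are random vectors, and the group labels, gallery sizes, and threshold are hypothetical stand-ins rather than any real FRT system or dataset. It shows only the base-rate effect described above: when a match database overrepresents one group, innocent members of that group face more false matches.

```python
# Purely synthetic simulation: random vectors stand in for face embeddings,
# and the gallery sizes mimic a mugshot database that overrepresents group A.
import numpy as np

rng = np.random.default_rng(0)
DIM, THRESHOLD = 16, 0.8          # embedding size and similarity cutoff

def embeddings(n: int) -> np.ndarray:
    """n random unit vectors, a crude stand-in for face embeddings."""
    v = rng.normal(size=(n, DIM))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# Gallery: group A is overrepresented tenfold (e.g., through
# disproportionate incarceration and mugshot collection).
gallery = {"A": embeddings(5000), "B": embeddings(500)}

def false_match_rate(group: str, n_probes: int = 2000) -> float:
    """Fraction of innocent probes (people NOT in the gallery) that still
    exceed the match threshold against some gallery entry. Probes are compared
    to their own group's gallery, a crude proxy for within-group resemblance."""
    sims = embeddings(n_probes) @ gallery[group].T
    return float((sims.max(axis=1) > THRESHOLD).mean())

for g in ("A", "B"):
    print(f"group {g}: false match rate ~ {false_match_rate(g):.3f}")
# More gallery entries mean more chances for an innocent probe to cross the
# threshold, so group A sees a substantially higher false match rate.
```

Even if the matching algorithm itself were perfectly "fair", the composition of the database alone would skew who bears the errors.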

Given that the technology is being increasingly adopted for law enforcement, a key focus of research into FRT has been on reducing its bias

Concerns and Considerations

While it is certainly important for technologies not to discriminate on the basis of race, sex, or other social differences, there are two key concerns with this framing. First, it assumes that an ideal FRT, one that could always correctly identify individuals in any image or video, can be unproblematically built. Second, it uncritically accepts that an ideal FRT is necessary and good for public safety. There are multiple issues with these assumptions.

First, FRT needs to be trained on large databases of images and videos of all kinds of people. Building such a database would require mass surveillance of the population regardless of consent, as every individual's facial data would be needed to ensure accuracy. This is problematic because it invades people's privacy and overrides their autonomy. If, on the other hand, the process of building such databases required informed consent, then people could opt out of it and thereby prevent FRT from recognizing them in the future, rendering the technology less effective.

Second, such a framing accepts that facial recognition is worth developing for the sake of public safety, as it can enable stronger law enforcement. This approach, however, fails to tackle the root causes underlying "illegal" activity, such as systemic injustice and poverty, or the causes of the disproportionate incarceration of Black people, such as overpolicing. Further, as history has shown, those who intend to commit a crime will find new ways of doing so that thwart advancements in FRT, such as wearing masks. Consequently, FRT only adds to the arms race between the police and "criminals" while contributing little towards eradicating the underlying societal problems.

Instead of asking "how can we reduce the biases of FRT?" why not try to tackle systemic injustice in policing? For example: "Can we develop technologies that ease tensions between the police and suspects to prevent unnecessary arrests and violence?"

Reframing the Problem

Drawing on the above concerns and considerations, there are several questions that can be asked to reframe the situation. Instead of asking “how can we reduce the biases of FRT?” one can ask “how can we use the inherent biases in FRT as a lens to examine underlying societal problems?” This positions biases in data as a strength rather than a weakness and leads to additional questions: “If there are disproportionately more Black people in the database, why is that the case?” and “Can examining how the database is constructed reveal problematic biases in society and law enforcement?”

Other directions can also be pursued that challenge not only FRT but also inherent problematic beliefs in law enforcement: “Can we develop technologies that ease tensions between the police and suspects to prevent unnecessary violence?” or “Can there be a more local, communal, and democratic approach to reducing crime in a neighborhood that reduces the need for overpolicing?” 

Examining how FRT is situated in a broader societal context and reframing the problems accordingly as suggested above is necessary for ethical practice. It helps identify problematic assumptions upon which existing questions lie, and sets a more robust ethical base upon which to pursue new ideas and designs. The intent is not to inhibit the growth of technologies, but rather to make the effort worthwhile in a manner that advances both the designer’s goals as well as democratic values in society.

Written by aanupam3 · Categorized: Contemporary, Featured

May 08 2022

Student Monitoring and Remote Tests

Monitoring students in online tests may be necessary to stop them from cheating, but it can also harm their mental health. Instead of accepting this dilemma, why not focus on a broader question: how can we change testing to actually help students rather than simply assess them?


Dominant Framing of the Problem

As online and remote learning options become more viable, so does the need to remotely assess students on their learning. However, exams and tests given remotely during the early stages of the COVID-19 pandemic saw a significant rise in cheating by students. Students employed a variety of means, such as googling the answers, asking others on online forums and discussion boards, and hiding physical notes around them. This increase in cheating triggered a response by educational institutions in the form of proctoring and tracking tools that required students and their screens to be continuously monitored as they took their tests (Subin, 2021). These tools have been purported to catch several instances of cheating that would otherwise have gone unnoticed (Harwell, 2020).

However, such invasive monitoring can also be detrimental to students' mental health and intrude on their privacy. Being continually watched, especially when one may be wrongfully accused of cheating due to "unusual" eye and head movements, has heightened stress and anxiety in many students. Further, the extensive data collected by such tools could be hacked, exposing students' private information (Harwell, 2020). This situation raises an important question: "Is it worth invading students' privacy to prevent cheating in remote tests?"

Invading students' privacy to prevent them from cheating can be detrimental to their mental health

Concerns and Considerations

There are two key concerns with this framing: its dichotomization of the issue, and its assumption that cheating is symptomatic of a problem with the student rather than with the culture of assessment and education.

First, this framing of the problem makes it seem that the situation requires a trade-off between allowing cheating and invading privacy. Such dichotomization is problematic because it forgoes other possibilities, such as designing assessments where "cheating" has no meaning. For example, consider project-based assessments in which students must identify problems in their local environments and design approaches to them using the subject matter learned in class. Such assignments cannot be "cheated" on, as there is no "correct" answer to cheat for; the problem itself is ambiguous and evolves over time. Nor is it "cheating" to ask for help from others (parents, friends), because learning to ask for support and work with others is often necessary to resolve local problems. Such an approach is also advantageous in that it encourages students to learn how to formulate problems and apply what they have learned, which is more aligned with professional practice.

Second, this framing places the blame for cheating squarely on the students, ignoring how the design of the educational/assessment system itself can contribute to cheating, especially for struggling students who do not have adequate support. One of the primary reasons students cheat on tests is to avoid failure. This is partly the result of a flawed assessment culture that punishes failure on tests rather than using it as an opportunity for learning and growth. For example, failing a test often means repeating the class in its entirety, rather than getting support on those specific areas that one struggles with.

Instead of tests, the class as a whole could aim to solve a real community problem, such as local water or air pollution, with the teacher as their guide. This would leave little room for "cheating"

Reframing the Problem

Drawing on these concerns, there are several possible avenues for reframing the above problem. As discussed, one could ask: "how can we design better ways of assessing students that support learning and growth?" This would foster exploration of assignments that use evaluation as an intermediate step towards learning, rather than as a way of categorizing students by skill or ability.

One could also question the inherent mistrust of students implied in the original framing and instead focus on community building: "how can we develop a community of learning where teachers and students learn from each other?" Such a question explores the possibility of the class functioning as a team rather than as an aggregate of individuals. For example, the class as a whole could aim to solve a real community problem, such as local water or air pollution, with the teacher as their guide. This would leave little room for "cheating," as individual students are not judged on their skill, but rather on how they hone that skill to contribute towards resolving the class problem and how willing they are to support others.

Questions can also be asked of the broader educational and assessment culture: "how can we re-design educational environments to better support struggling students?" Such an approach shifts the conversation from catching and punishing struggling students who see no option but to cheat, to identifying and supporting them early on without discrediting or mistrusting them. For example, shifting from a few high-stakes tests to regular low-stakes assignments that are iterative in nature can allow students to revise and learn from their mistakes. In particular, it gives struggling students multiple chances to improve their grade without being punished for not doing well.

Written by iacosta6 · Categorized: Contemporary, Featured

May 06 2022

Content Moderation

Content moderation on social media is a complicated issue. We must mitigate abuse and toxicity on online platforms in order to create spaces that enable inspiring, or even just cordial, conversation and discussion, but too much mitigation can infringe on the right to freedom of speech. Instead of asking "how should we moderate content?" perhaps the question should be: "how can we foster healthy online communities and environments?"


Dominant Framing of the Problem

Free speech is essential for a functioning democracy. As social media platforms such as Twitter and Facebook increasingly function as centers of public discourse, it is necessary for them to preserve free speech to ensure fair and democratic discussion of key societal issues. Control of free speech, especially by third parties (such as experts, governments, or the platforms themselves), can be problematic for at least two key reasons. First, it gives those parties significant power to shape public discourse. Such power can be abused to hegemonize ideas and stifle creative or critical thinking. For example, a government-controlled social media platform can restrict discussions that criticize social policies. Second, restricted free speech prevents new ideas from being tested through open debate in "the marketplace of ideas". An open forum allows problematic ideas to surface, be discussed, and be appropriately critiqued. This can help filter problematic ideas out of public discourse organically. A closed forum, on the other hand, will lead to such discussions happening elsewhere, often in self-reinforcing social circles where the ideas go unchecked. For example, when white supremacist beliefs are artificially suppressed from open discussion, they can fester and go unchallenged in underground social groups and, more problematically, be amplified.

However, moderating speech on social media is also necessary, particularly "free" speech that inhibits other speech or harms people. Notably, this can come in two forms: abusive speech and mis/disinformation. First, abusive behavior on social media, such as threats, harassment, bullying, and hate speech, can deter many people from participation, effectively silencing them and suppressing their ideas on the platform. For example, doxxing, where a user threatens another by posting their address or claiming to know where they live, can blackmail people into submission. Similarly, racist, sexist, and xenophobic threats of violence can drive targeted social groups away from participation. Second, mis/disinformation can also effectively curtail free speech if it harms people or corrupts their ideas. For example, distortions about the effectiveness of vaccines or the effects of climate change can and have endangered real people. Given that such ideas have already been invalidated by scientific research, it can be unwise to allow them to spread detached from scientific critique.

Now, it is difficult to always judge what constitutes "abusive" behavior or "mis/disinformation", given their highly contextual nature. What is abusive for one person in a specific situation may not be abusive for another in a different situation. Further, information is always evolving: what is considered false today may be proven true tomorrow, and vice versa. Moderation can therefore preemptively curtail free speech. Conversely, however, unmoderated speech can also be used to subdue the free speech of others or to harm them, as discussed above. This raises a paradoxical question: can social media platforms preserve free speech while also moderating content?


Freedom of speech can be used to curtail the speech of others through threats of violence

Concerns and Considerations

There are two primary concerns with the above framing. First, it understands "free speech" as unconstrained speech, i.e., the freedom to say whatever one wants. Second, it understands "moderation" primarily in terms of removing content.

First, if one of the key goals of free speech is to promote open discussion of ideas, then "free speech" cannot be understood as unconstrained or unrestricted speech where one can say anything one pleases. For any open discussion to be meaningful, it must follow certain social and cultural norms of civility, respect, and openness. Such norms are visible in other venues of open discourse, such as town hall meetings, event gatherings, and even online forums, where anyone who does not follow them can be removed or blocked from the discussion. Importantly, almost any idea can still be shared in such an environment, as long as it is presented in a tone that invites others to discuss rather than abuses or threatens them. Anyone who harasses others under the guise of "free speech" in such an open forum is not usually tolerated. Consequently, free speech, when understood in the context of an open democratic forum for ideas, is not opposed to moderation that seeks to keep speech civil and respectful.

Second, moderation can be understood not only as the removal of content, but also as a means of tempering, contextualizing, and integrating it meaningfully into discourse. While some content, such as threats of violence and harassment, must be removed, other content, such as mis/disinformation, can be checked by strategies like warning labels and links to verified sources. Many social media platforms have already implemented such strategies for COVID-19 vaccine misinformation, flagging it and linking it to verified sources such as the CDC (in the US) or the WHO. This situates misinformation within the debate rather than excluding it, allowing people to examine how and why the content is treated as misinformation. Such moderation strategies can promote more informed discussion about misinformation, rather than allowing it to grow unchecked in other avenues.
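As a rough illustration of this distinction, consider the sketch below, which separates take-down moderation from contextualizing moderation. The flag names, categories, and source URLs are hypothetical placeholders rather than any platform's actual policy or API; the point is simply that "moderate" need not mean "remove".

```python
# A hypothetical sketch of moderation as tempering rather than removal:
# direct harms are taken down, contested claims stay up with context attached.
from dataclasses import dataclass

REMOVE = {"threat", "harassment"}               # direct harms: take down
CONTEXTUALIZE = {                               # contested claims: keep + label
    "vaccine_misinfo": "https://www.who.int",
    "climate_misinfo": "https://www.ipcc.ch",
}

@dataclass
class Post:
    text: str
    flags: set                                  # labels from some upstream review

def moderate(post: Post):
    """Return None for removed posts; otherwise the post plus context links."""
    if post.flags & REMOVE:
        return None                             # excluded from discourse entirely
    context = [CONTEXTUALIZE[f] for f in post.flags if f in CONTEXTUALIZE]
    return post.text, context                   # kept, but situated in the debate

# Hypothetical usage:
print(moderate(Post("vaccines cause X", {"vaccine_misinfo"})))
# -> ('vaccines cause X', ['https://www.who.int'])
print(moderate(Post("I know where you live", {"threat"})))  # -> None
```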


The problem of content moderation and free speech on social media platforms requires rethinking such platforms from the ground up to actively foster democratic discourse rather than maximize engagement

Reframing the Problem

One of the key ways to reframe the problem is to challenge the prevailing framework of the social networks themselves. Most social media platforms were not designed for nuanced open public discourse. Rather, they were designed to maximize engagement. Given that rhetoric involving inflammatory, offensive, divisive, and abusive content often gathers significant engagement, it is no surprise that social media platforms magnify this kind of content over nuanced, respectful, and sustained discussion. Consequently, the problem of content moderation and free speech on social media platforms requires rethinking such platforms from the ground up to actively foster democratic discourse and cultivate harmonious relationships across sociopolitical difference. This raises one possible reframing of the situation: “how can social media be designed to favor more democratic forms of discourse while also maximizing engagement?”

Written by iacosta6 · Categorized: Contemporary

May 02 2022

Crowdsourced Apps and Accessibility

People with chronic illnesses, such as sensory processing anomalies and environmental sensitivities, have traditionally been left out of disability discourse and accessible design practices in the context of the built environment. What should we consider when designing for this space?


Dominant Framing of the Problem

Accessibility is often a key concern in the design of public spaces. However, accessibility concerns often do not consider people with chronic illnesses such as Lyme disease, sensory processing anomalies, and high environmental sensitivity. This makes it difficult for them to find public spaces that accommodate their specific needs. For example, people with a sensory processing disorder may find it difficult to find a public place with low levels of stimulation like noise, or those with inflammatory diseases might not always be able to find appropriate restrooms. 

Mobile apps can offer some support in helping people with chronic illnesses find accommodating public spaces. Using a crowdsourcing approach together with machine learning algorithms that draw from existing databases like Google Maps and Yelp, an app could collect, filter, and display information about accommodations based on the unique needs or preferences associated with a wide variety of chronic illnesses. For example, it could contain an interactive map with pinned destinations and a sidebar listing spatial affordances that serve as filters, such as "quiet space" or "clean restrooms." Given the wide variety of chronic illnesses, can crowdsourced mobile apps improve accessibility for people with chronic illness in public spaces?
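A minimal sketch of the filtering idea might look like the following. All names, tags, and thresholds here are hypothetical, invented for illustration rather than drawn from any actual app: places accumulate crowdsourced reports, and a query returns only the places whose reports support every requested affordance.

```python
# A minimal, hypothetical sketch of crowdsourced affordance filtering.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Place:
    name: str
    reports: Counter = field(default_factory=Counter)   # affordance -> votes

    def supports(self, affordance: str, min_votes: int = 3) -> bool:
        """Treat an affordance as present once enough users report it."""
        return self.reports[affordance] >= min_votes

def find_places(places: list[Place], required: list[str]) -> list[Place]:
    """Return places whose crowdsourced reports match every requested filter."""
    return [p for p in places if all(p.supports(a) for a in required)]

# Hypothetical usage:
library = Place("Central Library", Counter({"quiet space": 7, "clean restrooms": 4}))
cafe = Place("Corner Cafe", Counter({"clean restrooms": 5}))
print([p.name for p in find_places([library, cafe], ["quiet space"])])
# -> ['Central Library']
```

Note that the sketch already bakes in the assumptions questioned below: fixed categories such as "quiet space" and an arbitrary vote threshold standing in for trust.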

People with a sensory processing disorder may find it difficult to find a public place with low levels of stimulation like noise, or those with inflammatory diseases might not always be able to find appropriate restrooms.

Concerns and Considerations

There are two primary concerns with this framing of the problem-situation: it assumes that the experiences of people in a place can be captured in a set of clearly defined categories, and it places the burden of accessibility on those who are chronically ill.

First, the filters on the app require one to classify places on the basis of clearly defined categories, such as "clean restrooms", "quiet space", or "options for lying down". However, such categories can be difficult to trust, as they can mean different things to different people: an area that is quiet for some may still be loud for others. It is inherently difficult to quantify or parameterize qualitative experiences. The same disease can manifest in different ways in different people, resulting in many unique experiences. For example, different people with Lyme disease are affected by different kinds of external stimuli. Further, places are dynamic: they change over time. Areas that have low stimulation or plenty of sitting options may not stay that way in the future. All of these factors make it difficult, if not impossible, to develop meaningful categories about places for chronically ill people.

Second, the app, by virtue of being crowdsourced, depends on inputs by those who are chronically ill or knowledgeable about chronic illnesses. It makes it their responsibility to regularly identify, share, verify, and amend information about the environments of public places. This is problematic as incorrect information about places can go unchecked, which can endanger chronically ill people who use the app to decide where to go.

It is inherently difficult to quantify qualitative experiences. The same disease can manifest in different ways in different people, resulting in several unique experiences.

Reframing of the Problem

Given the above concerns, there are multiple ways to reframe the problem-situation. One approach would be to support research and action into environmental change: "How can public spaces adapt to the varying needs of chronically ill people?" This would invite exploration into innovative techno-spatial designs. For example, can spaces be designed to have adjustable levels of audio, olfactory, and visual isolation from their surrounding areas to accommodate those with sensory processing disorders? Can parks provide portable defibrillators within their premises to accommodate those with a chronic risk of heart attack?

Another approach would be to focus on communal and cultural change: "can we foster the development of more supportive communities and systems of care?" Given that chronic illnesses are so diverse and dynamic, it can be difficult to establish and maintain public spaces that are considerate of all possible needs. Instead, focusing on developing a broader community of care that helps people with chronic illnesses be accommodated can be more sustainable in the long term. For example, a local social network that connects those with chronic illnesses living in the same area can help develop a community built on mutual care and support, one that can extend to include the families and friends of chronically ill people. This way, when someone decides to visit a public space, they can reach out to their community members for help being accommodated in that space, and can do the same for others.

Written by aanupam3 · Categorized: Contemporary


