Ethical Imagination & Technoscientific Practices

May 08 2022

Student Monitoring and Remote Tests

Monitoring students in online tests may be necessary to stop them from cheating, but it can also harm their mental health. Instead of falling into this dilemma, why not focus on a broader question: how can we change testing to actually help students rather than simply assess them?


Dominant Framing of the Problem

As online and remote learning options become more viable, so does the need to assess students’ learning remotely. However, exams and tests given remotely during the early stages of the COVID-19 pandemic saw a significant rise in cheating. Students employed a variety of means, such as googling the answers, asking others in online forums and discussion boards, and hiding physical notes around them. This increase in cheating triggered a response from educational institutions in the form of proctoring and tracking tools that continuously monitor students and their screens as they take their tests (Subin, 2021). These tools have been purported to catch several instances of cheating that would otherwise have gone unnoticed (Harwell, 2020).

However, such invasive monitoring can also be detrimental to students’ mental health and intrude on their privacy. Being continually watched—especially when one may be wrongfully accused of cheating due to “unusual” eye and head movements—has heightened stress and anxiety in many students. Further, the extensive data collected by such tools could be hacked, exposing students’ private information (Harwell, 2020). This situation raises an important question: “Is it worth invading students’ privacy to prevent cheating in remote tests?”

 


Invading students’ privacy to prevent them from cheating can be detrimental to their mental health

Concerns and Considerations

There are two key concerns with this framing: its dichotomization of the issue, and its assumption that cheating is symptomatic of a problem with the student rather than with the culture of assessment and education.

First, this framing of the problem makes it seem that the situation requires a trade-off between allowing cheating and invading privacy. Such dichotomization is problematic because it forecloses other possibilities, such as designing assessments where “cheating” has no meaning. For example, consider project-based assessments where students have to identify problems in their local environments and design approaches to them using the subject matter learned in class. Such assignments cannot be “cheated” on, as there is no “correct” answer to cheat for; the problem itself is ambiguous and evolves over time. Nor is it “cheating” to ask for help from others (parents, friends), because learning to ask for support and to work with others is often necessary to resolve local problems. Such an approach is also advantageous in that it encourages students to learn how to formulate problems and apply what they have learned to them, which is more aligned with professional practice.

Second, this framing places the blame for cheating squarely on the students, ignoring how the design of the educational and assessment system itself can contribute to cheating, especially for struggling students who do not have adequate support. One of the primary reasons students cheat on tests is to avoid failure. This is partly the result of a flawed assessment culture that punishes failure on tests rather than using it as an opportunity for learning and growth. For example, failing a test often means repeating the class in its entirety, rather than getting support in the specific areas one struggles with.

 

 


Instead of tests, the class as a whole could aim to solve a real community problem, such as local water or air pollution, with the teacher as their guide. This would leave little room for “cheating”.

Reframing the Problem

Drawing on these concerns, there are several possible avenues for reframing the problem. As discussed above, one could ask, “how can we design better ways of assessing students that support learning and growth?” This would foster exploration of assignments that use evaluation as an intermediary step toward learning, rather than as a way of categorizing students by skill or ability.

One could also question the inherent mistrust of students implied in the original framing and instead focus on community building: “how can we develop a community of learning where teachers and students learn from each other?” Such a question explores the possibility of the class functioning as a team rather than as an aggregate of individuals. For example, the class as a whole could aim to solve a real community problem, such as local water or air pollution, with the teacher as their guide. This would leave little room for “cheating,” as individual students are not judged on their skill, but rather on how they hone those skills to contribute toward resolving the class problem and how willing they are to support others.

Questions can also be asked of the broader educational and assessment culture: “how can we re-design educational environments to better support struggling students?” Such an approach shifts the conversation from catching and punishing struggling students who see no option but to cheat, to identifying and supporting them early on without discrediting or mistrusting them. For example, shifting from a few high-stakes tests to regular, low-stakes assignments that are iterative in nature can allow students to revise and learn from their mistakes. In particular, it gives struggling students multiple chances to improve their grade without being punished for not doing well.

Written by iacosta6 · Categorized: Contemporary, Featured

May 06 2022

Content Moderation

Content moderation on social media is a complicated issue. We must mitigate abuse and toxicity on online platforms in order to create spaces that enable inspiring, or even just cordial, conversation and discussion, but too much mitigation can infringe on the right to freedom of speech. Instead of asking how we should moderate content, perhaps the question should be: how can we foster healthy online communities and environments?


Dominant Framing of the Problem

Free speech is essential for a functioning democracy. As social media platforms such as Twitter and Facebook increasingly function as centers of public discourse, it is necessary for them to preserve free speech to ensure fair and democratic discussion of key societal issues. Control of free speech, especially by third parties (such as experts, governments, or the platforms themselves), can be problematic for at least two key reasons. First, it gives those parties significant power to shape public discourse. Such power can be abused to hegemonize ideas and stifle creative or critical thinking. For example, a government-controlled social media platform can restrict discussions that criticize social policies. Second, restricting free speech prevents new ideas from being tested through open debate in “the marketplace of ideas”. An open forum allows problematic ideas to surface, be discussed, and be appropriately critiqued, which can help filter them out of public discourse organically. A closed forum, on the other hand, will push such discussions elsewhere, often into self-reinforcing social circles where the ideas can go unchecked. For example, when white supremacist beliefs are artificially suppressed from open discussion, they can fester and go unchallenged in underground social groups and, more problematically, be amplified.

However, moderating speech on social media is also necessary—particularly “free” speech that inhibits other speech or harms people. Notably, this can come in two forms: abusive speech and mis/disinformation. First, abusive behavior on social media, such as threats, harassment, bullying, and hate speech, can deter many people from participating, effectively silencing them and suppressing their ideas on the platform. For example, doxxing—where a user threatens another by posting their address or by claiming to know where they live—can blackmail people into submission. Similarly, racist, sexist, and xenophobic threats of violence can drive targeted social groups away from participation. Second, mis/disinformation can also effectively curtail free speech if it harms people or corrupts their ideas. For example, distortions about the effectiveness of vaccines or the effects of climate change can endanger, and have endangered, real people. Given that such ideas have already been invalidated by scientific research, it can be unwise to allow them to spread detached from scientific critique.

Now, it is difficult to always judge what constitutes “abusive” behavior or “mis/disinformation”, given their highly contextual nature. What is abusive for one person in a specific situation may not be abusive for another in a different situation. Further, information is always evolving: what is considered false today may be proven true tomorrow, and vice versa. Moderation can therefore preemptively curtail free speech. Conversely, however, unmoderated speech can also be used to subdue the free speech of others or to harm them, as discussed above. This raises a paradoxical question: can social media platforms preserve free speech while also moderating content?


Freedom of speech can be used to curtail the speech of others through threats of violence

Concerns and Considerations

There are two primary concerns with the above framing. First, it understands “free speech” as unconstrained speech, i.e., the freedom to say whatever one wants. Second, it understands “moderation” primarily in terms of removing content.

First, if one of the key goals of free speech is to promote open discussion of ideas, then “free speech” cannot be understood as unconstrained or unrestricted speech where one can say anything they please. For any open discussion to be meaningful, it must follow certain social and cultural norms of civility, respect, and openness. Such norms are visible in other venues of open discourse, such as town hall meetings, event gatherings, and even online forums, where anyone who does not follow them can be removed or blocked from the discussion. Importantly, almost any idea can still be shared in such an environment, as long as it is presented in a manner that invites others to discuss it rather than abuses or threatens them. Anyone who harasses others under the guise of “free speech” in such an open forum is not usually tolerated. Consequently, free speech—when understood in the context of an open democratic forum for ideas—is not opposed to moderation that seeks to keep speech civil and respectful.

Second, moderation can be understood not only as the removal of content, but also as a means of tempering, contextualizing, and integrating it meaningfully into discourse. While some content, such as threats of violence and harassment, must be removed, other content, such as mis/disinformation, can be checked by strategies like warning labels and links to verified sources. Many social media platforms have already begun implementing such strategies for COVID-19 vaccine misinformation, flagging it and linking it to verified sources such as the CDC (in the US) or the WHO. This situates misinformation within the debate rather than excluding it, allowing people to examine how and why the content is treated as misinformation. Such moderation strategies can promote more informed discussion about misinformation, rather than allowing it to grow unchecked in other avenues.


The problem of content moderation and free speech on social media platforms requires rethinking such platforms from the ground up to actively foster democratic discourse rather than maximize engagement

Reframing the Problem

One of the key ways to reframe the problem is to challenge the prevailing framework of the social networks themselves. Most social media platforms were not designed for nuanced, open public discourse; rather, they were designed to maximize engagement. Given that inflammatory, offensive, divisive, and abusive content often garners significant engagement, it is no surprise that social media platforms amplify this kind of content over nuanced, respectful, and sustained discussion. Consequently, the problem of content moderation and free speech on social media requires rethinking such platforms from the ground up to actively foster democratic discourse and cultivate harmonious relationships across sociopolitical difference. This suggests one possible reframing of the situation: “how can social media be designed to favor more democratic forms of discourse while also maximizing engagement?”

Written by iacosta6 · Categorized: Contemporary

