Content Moderation
Content moderation on social media is a complicated issue. We must mitigate abuse and toxicity on online platforms in order to create spaces that enable inspiring, or even just cordial, conversation and discussion, but too much mitigation can infringe on people’s right to freedom of speech. Instead of asking how we should moderate content, perhaps the question should be: how can we foster healthy online communities and environments?
Dominant Framing of the Problem
Free speech is essential for a functioning democracy. As social media platforms such as Twitter and Facebook increasingly function as centers of public discourse, it is necessary for them to preserve free speech to ensure fair and democratic discussion of key societal issues. Control of free speech, especially by third parties (such as experts, governments, or the platforms themselves), can be problematic for at least two reasons. First, it gives those parties significant power to shape public discourse. Such power can be abused to hegemonize ideas and stifle creative or critical thinking. For example, a government-controlled social media platform can restrict discussions that criticize social policies. Second, restricting free speech prevents new ideas from being tested through open debate in “the marketplace of ideas”. An open forum allows problematic ideas to surface, be discussed, and be appropriately critiqued, which can help filter them out of public discourse organically. A closed forum, on the other hand, pushes such discussions elsewhere, often into self-reinforcing social circles where the ideas go unchecked. For example, when white supremacist beliefs are artificially suppressed from open discussion, they can fester and go unchallenged in underground social groups and, more problematically, be amplified.
However, moderating speech on social media is also necessary—particularly “free” speech that inhibits other speech or harms people. Notably, this can come in two forms: abusive speech and mis/disinformation. First, abusive behavior on social media, such as threats, harassment, bullying, and hate speech, can deter many people from participation, effectively silencing them and suppressing their ideas on the platform. For example, doxxing—where a user threatens another by posting their address or claiming to know where they live—can blackmail people into submission. Similarly, racist, sexist, and xenophobic threats of violence can drive targeted social groups away from participation. Second, mis/disinformation can also effectively curtail free speech if it harms people or corrupts their ideas. For example, distortions about the effectiveness of vaccines or the effects of climate change can and have endangered real people. Given that such ideas have already been invalidated by scientific research, it can be unwise to allow them to spread untethered from scientific critique.
Now, it is difficult to always judge what constitutes “abusive” behavior or “mis/disinformation” given their highly contextual nature. What is abusive for one person in a specific situation may not be abusive for another in a different situation. Further, information is always evolving: what is considered false today may be proven true tomorrow, and vice versa. Moderation can therefore preemptively curtail free speech. Conversely, however, unmoderated speech can also be used to subdue the free speech of others or to harm them, as discussed above. This raises a paradoxical question: can social media platforms preserve free speech while also moderating content?
Freedom of speech can be used to curtail the speech of others through threats of violence
Concerns and Considerations
There are two primary concerns with the above framing. First, it understands “free speech” as unconstrained speech, i.e., freedom to say whatever one wants to say. Second, it understands “moderation” primarily in terms of removing content.
First, if one of the key goals of free speech is to promote open discussion of ideas, then “free speech” cannot be understood as unconstrained or unrestricted speech where one can say anything one pleases. For any open discussion to be meaningful, it must follow certain social and cultural norms of civility, respect, and openness. Such norms are visible in other venues of open discourse, such as town hall meetings, event gatherings, and even online forums, where anyone who does not follow them can be removed or blocked from the discussion. Importantly, almost any idea can still be shared in such an environment, as long as it is presented in a tone that invites others to discuss rather than abuses or threatens them. Anyone who harasses others under the guise of “free speech” in such an open forum is not usually tolerated. Consequently, free speech—when understood in the context of an open democratic forum for ideas—is not opposed to moderation that seeks to keep speech civil and respectful.
Second, moderation can be understood not only as the removal of content, but also as a means of tempering, contextualizing, and integrating it meaningfully into discourse. While some content, such as threats of violence and harassment, must be removed, other content, such as mis/disinformation, can be checked by strategies like warning labels and links to verified sources. Many social media platforms have already begun implementing such strategies for COVID-19 vaccine misinformation, flagging it and linking it to verified sources such as the CDC (in the US) or the WHO. This situates misinformation within the debate rather than excluding it, allowing people to examine how and why the content is treated as misinformation. Such moderation strategies can promote more informed discussion about misinformation, rather than allowing it to grow unchecked in other avenues.
The problem of content moderation and free speech on social media platforms requires rethinking such platforms from the ground up to actively foster democratic discourse rather than maximize engagement
Reframing the Problem
One of the key ways to reframe the problem is to challenge the prevailing design of the social networks themselves. Most social media platforms were not designed for nuanced, open public discourse. Rather, they were designed to maximize engagement. Given that inflammatory, offensive, divisive, and abusive content often garners significant engagement, it is no surprise that social media platforms amplify this kind of content over nuanced, respectful, and sustained discussion. Consequently, the problem of content moderation and free speech on social media platforms requires rethinking such platforms from the ground up to actively foster democratic discourse and cultivate harmonious relationships across sociopolitical difference. This suggests one possible reframing of the situation: “how can social media be designed to favor more democratic forms of discourse while also maximizing engagement?”