https://www.nature.com/articles/s41599-020-00550-7
Luke Munn
Published: 30 July 2020
Hate speech and toxic communication online are on the rise. Responses to
this issue tend to offer technical (automated) or non-technical (human
content moderation) solutions, or see hate speech as a natural product
of hateful people. In contrast, this article begins by recognizing
platforms as designed environments that support particular practices
while discouraging others. In what ways might these design architectures
be contributing to polarizing, impulsive, or antagonistic behaviors?
Two platforms are examined: Facebook and YouTube. Based on engagement,
Facebook’s Feed drives views but also privileges incendiary content,
setting up a stimulus–response loop that promotes outrage expression.
YouTube’s recommendation system is a key interface for content
consumption, yet this same design has been criticized for leading users
towards more extreme content. Across both platforms, design is central
and influential, proving to be a productive lens for understanding toxic
communication.
•••••
Toxic communication is not just a nuisance or a nasty byproduct of online environments, but has more fundamental implications for human rights. “Online hate is no less harmful because it is online”, stressed a recent U.N. report (Kaye, 2019): “To the contrary, online hate, with the speed and reach of its dissemination, can incite grave offline harm and nearly always aims to silence others”. Hate forms a broad spectrum with extremist ideologies at one end. Online environments allow users to migrate smoothly along this spectrum, forming a kind of pipeline for radicalization (O’Callaghan et al., 2015; Munn, 2019). In this respect, the hate-based violence of the last few years is not random or anomalous, but the logical result of individuals spending years in hate-filled spaces where racist, sexist, and anti-Semitic views were normalized.
•••••
The [Facebook] Feed is designed according to a particular logic. Since 2009, stories have not been sorted chronologically, with updates from friends simply listed in reverse order and the most recent appearing first (Wallaroo Media, 2019). While this change provoked a degree of backlash from users, the chronological feed had itself proved overwhelming, especially with the hundreds of friends each user has. “If you have 1500 or 3000 items a day, then the chronological feed is actually just the items you can be bothered to scroll through before giving up”, explains analyst Benedict Evans (2018), “which can only be 10% or 20% of what’s actually there”. Instead, the Feed is driven by engagement. In this design, Facebook weighs dozens of factors, from who posted the content to their frequency of posts and the average time spent on this piece of content. Posts with higher engagement scores are included and prioritized; posts with lower scores are buried or excluded altogether.
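To make the contrast with a chronological feed concrete, the sketch below ranks a handful of posts by a weighted engagement score. It is a minimal illustration under stated assumptions, not Facebook’s actual algorithm: the signals, weights, threshold, and scoring function are all hypothetical stand-ins for the “dozens of factors” described above.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    age_hours: float         # how long ago the post was published
    reactions: int           # likes, comments, and shares combined
    avg_view_seconds: float  # average time users spend on this post
    affinity: float          # how often the viewer interacts with this author (0-1)

# Hypothetical weights; the real system weighs dozens of signals.
WEIGHTS = {"reactions": 1.0, "view_time": 0.5, "affinity": 20.0}

def engagement_score(post: Post) -> float:
    """Combine engagement signals into one score, decayed by age."""
    raw = (WEIGHTS["reactions"] * post.reactions
           + WEIGHTS["view_time"] * post.avg_view_seconds
           + WEIGHTS["affinity"] * post.affinity)
    return raw / (1.0 + post.age_hours)  # older posts sink

posts = [
    Post("friend_a", age_hours=1.0, reactions=3, avg_view_seconds=4, affinity=0.9),
    Post("page_b", age_hours=6.0, reactions=480, avg_view_seconds=25, affinity=0.1),
    Post("friend_c", age_hours=0.5, reactions=0, avg_view_seconds=2, affinity=0.4),
]

# A chronological feed simply lists the newest posts first.
chronological = sorted(posts, key=lambda p: p.age_hours)

# An engagement feed prioritizes high scorers and buries or excludes the rest.
ranked = sorted(posts, key=engagement_score, reverse=True)
feed = [p for p in ranked if engagement_score(p) > 8.0]
```

In this toy run, the six-hour-old page post with hundreds of reactions outranks both recent friend updates, and the lowest scorer drops out of the feed entirely, mirroring the prioritize-or-bury behavior described above.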
•••••
The problem with such sorting, of course, is that incendiary, polarizing posts consistently achieve high engagement (Levy, 2020, p. 627). This content is designed to draw engagement, to provoke a reaction. Indeed, in 2018 an internal research team at Facebook reported precisely this finding: by design, the platform was feeding people “more and more divisive content in an effort to gain user attention and increase time on the platform” (Horwitz and Seetharaman, 2020). However, Facebook management ignored these findings and shelved the research.
This divisive material often has a strong moral charge. It takes a controversial topic and establishes two sharply opposed camps, championing one group while condemning the other. These are the headlines and imagery that leap out at a user as they scroll past, forcing them to come to a halt. This offensive material hits a nerve, inducing a feeling of disgust or outrage. “Emotional reactions like outrage are strong indicators of engagement”, observes designer and technologist Tobias Rose-Stockwell (2018), “this kind of divisive content will be shown first, because it captures more attention than other types of content”. Though speculative, one explanation is that sharing this content offloads these feelings, removing their burden on us individually by spreading them across our social network and garnering some sympathy or solidarity.
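This stimulus–response dynamic can be made explicit with a toy simulation. Assuming (hypothetically) that more provocative posts earn a higher reaction rate per view, and that reactions feed back into ranking, divisive content climbs to the top of the feed within a few rounds:

```python
import random

random.seed(1)  # reproducible run

# Each post has a fixed "provocation" level in [0, 1]. Assume, hypothetically,
# that the chance a viewer reacts rises with provocation.
posts = [{"provocation": p, "reactions": 0} for p in (0.1, 0.3, 0.5, 0.9)]

def simulate_round(posts, viewers=1000):
    # Rank by accumulated reactions: higher slots receive far more views.
    ranked = sorted(posts, key=lambda p: p["reactions"], reverse=True)
    for slot, post in enumerate(ranked):
        views = viewers // (slot + 1)
        for _ in range(views):
            if random.random() < 0.05 + 0.2 * post["provocation"]:
                post["reactions"] += 1  # each reaction feeds back into rank

for _ in range(5):
    simulate_round(posts)

for post in sorted(posts, key=lambda p: p["reactions"], reverse=True):
    print(post)
```

Even though the most provocative post starts in the bottom slot, its higher per-view reaction rate compounds through the ranking, and in most runs it dominates the feed after a handful of rounds. The loop, not any individual’s malice, does the amplifying.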
•••••
At its worst, then, Facebook’s Feed stimulates the user with outrage-inducing content while also enabling its seamless sharing, allowing such content to proliferate rapidly across the network. By increasing the prevalence of such content and making it easier to share, the Feed normalizes it. Outrage retains its ability to provoke engagement, but in many ways becomes an established aspect of the environment. For neuroscientist Molly Crockett, this is one of the keys to understanding the rise of hate speech online. Crockett (2017, p. 770) stresses that “when outrage expression moves online it becomes more readily available, requires less effort, and is reinforced on a schedule that maximizes the likelihood of future outrage expression in ways that might divorce the feeling of outrage from its behavioral expression”. Design, in this sense, works to reduce the barrier to outrage expression. Sharing a divisive post with an audience of hundreds or thousands is just a click away.
•••••