Individual Submission Summary

Effects of Online Intolerance on Targets and Bystanders: A Four-Country Study

Sun, September 8, 10:00 to 11:30am, Marriott Philadelphia Downtown, 408

Abstract

Research on online political discourse has long been concerned with the pervasiveness of incivility across various digital arenas. Still, most of this research has focused on rude, vulgar, or offensive discourse, which is not necessarily harmful to discussion participants or bystanders (Rossini, 2022). We shift our focus to intolerant online discourse—content that is hateful, threatening, discriminatory, or harassing. In this study, we advance the understanding of the effects of exposure to intolerant discourse in three important ways. First, we examine a wide range of reactions to intolerant speech, including direct responses, such as blocking or reporting the authors, and reductions in social media activity, such as avoiding certain discussion topics, posting less frequently, or even deleting one's profile, thus estimating the harm that intolerant speech causes. Second, we assess whether the effects of intolerant speech differ between targets and bystanders of intolerance. Third, we compare the effects of intolerant discourse across four diverse democracies—Brazil, Germany, the UK, and the US—which enables us to estimate these relationships more robustly.
Intolerant discourse is a serious threat to online political expression: according to the Pew Research Center (2021), 41% of Americans report having experienced online harassment, with most cases taking place on social media, while 1 in 4 teens in the UK report having seen hateful messages (Ofcom, 2019). In Germany, 40% of internet users reported exposure to online hate speech (Geschke et al., 2019), and in Brazil the organization SaferNet reported a substantial rise in reports of hateful online crimes in 2022, including racism, xenophobia, misogyny, and religious intolerance. Intolerant content can silence marginalized voices and turn social media platforms into hostile spaces, undermining democratic discourse and leading people to abandon these platforms as places for discussion. However, limited research has investigated the negative effects of being targeted by or exposed to online intolerance. In this paper, we tackle this important research gap and investigate the potential effects of exposure to intolerant speech in four countries where online intolerance is pervasive. The four countries in our sample are democracies with highly active internet users and feature high levels of political animosity towards disadvantaged groups, such as immigrants, LGBTQ+ people, and other minorities. Notably, far-right parties and politicians have weaponized discourse against minorities in recent electoral cycles across all four countries. These countries also differ in how they regulate online speech, with Germany and Brazil having more restrictive legislation, while the UK and the US are the least regulated.
We leverage pre-registered survey experiments on large samples (N = 2,000 per country), constructed to mirror the adult population on key characteristics in each country, to examine perceptions of and reactions to intolerant online discourse. Our experiments manipulate the target (women, LGBT), tone (civil, uncivil), and type (discriminatory, hateful, threatening) of intolerant online discourse, using realistic mock-ups of social media posts. We examine whether perceptions and reactions vary based on personal traits, political attitudes, and experiences with online toxicity, as well as on being a member of a 'targeted' identity group. We also observe whether these effects are consistent across different countries and contexts.
We expect participants to respond differently to the different types of intolerance featured in the treatments. For instance, discrimination often falls outside the scope of community standards and platform moderation rules, while hate speech and violent threats are likely perceived as more harmful (Stryker et al., 2016). As such, we expect participants to be more sensitive to hateful and threatening speech, so that perceptions of intolerance will differ by type, with discrimination being perceived as less problematic than the other types (H1). We also expect the tone of the message to have an impact, with uncivil intolerant discourse being perceived and judged more harshly than intolerant discourse presented in a civil fashion (H2). These differences are also likely to shape intentions to react to posts—e.g., responding, blocking, or reporting—as well as to reduce engagement with political discussions (H3). Finally, we expect participants exposed to the treatments to report higher support for content moderation practices in the uncivil condition (H4). Considering the position of participants as targets or bystanders, we expect participants who identify as members of the targeted groups to be more affected by our treatments, regardless of the tone of the treatment (H5).

References are suppressed due to the word limit.
