As online social interaction has risen, so has racist online harassment. To test whether this behavior can be curbed, we designed Twitter accounts to apply social pressure to online harassers. We find that promoting positive social norms can reduce subjects' use of racial slurs.
Munger, Kevin. “Tweetment Effects on the Tweeted: Experimentally Reducing Racist Harassment.” Political Behavior 39, no. 3 (2017): 629–49. https://doi.org/10.1007/s11109-016-9373-5
Nov 11, 2016
Area of Study
I conduct an experiment which examines the impact of group norm promotion and social sanctioning on racist online harassment. Racist online harassment de-mobilizes the minorities it targets, and the open, unopposed expression of racism in a public forum can legitimize racist viewpoints and prime ethnocentrism. I employ an intervention designed to reduce the use of anti-black racist slurs by white men on Twitter. I collect a sample of Twitter users who have harassed other users and use accounts I control (“bots”) to sanction the harassers. By varying the identity of the bots between in-group (white man) and out-group (black man) and by varying the number of Twitter followers each bot has, I find that subjects who were sanctioned by a high-follower white male significantly reduced their use of a racist slur. This paper extends findings from lab experiments to a naturalistic setting using an objective, behavioral outcome measure and a continuous 2-month data collection period. This represents an advance in the study of prejudiced behavior.
The rise of online social interaction has brought with it new opportunities for individuals to express their prejudices and engage in verbal harassment. Racist online harassment de-mobilizes the minorities it targets, and the open, unopposed expression of racism in a public forum can legitimize racist viewpoints and prime ethnocentrism. Severe online harassment takes the form of explicit threats or the posting of personal information, forcing targets to modify their behavior out of fear for their immediate safety. Although all harassment can contribute to a toxic online community, this paper focuses specifically on racist harassment by white men against black users. Is there a way to curb this online behavior?
To answer this question, we designed an experiment to reduce the use of anti-black racist slurs by white men on Twitter. First, we collected a sample of Twitter users who had harassed other users. To sanction each harasser, we created accounts we controlled ("bots"), varying the bots' identities between in-group (white man) and out-group (black man) and varying the number of Twitter followers each bot had. Through these bots, social pressure is applied to an individual rather than outright censorship.
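The design above is a 2×2 randomized experiment: each sanctioned harasser is assigned a bot that varies on racial identity (in-group vs. out-group) and follower count (high vs. low). As a rough illustration only, a minimal sketch of that random assignment might look like the following; the condition labels, function names, and seed are illustrative assumptions, not the paper's actual code.

```python
import random

# Hypothetical labels for the 2x2 design: bot racial identity
# crossed with bot follower count (four conditions in total).
CONDITIONS = [
    (identity, followers)
    for identity in ("in-group (white)", "out-group (black)")
    for followers in ("high followers", "low followers")
]

def assign_conditions(subjects, seed=0):
    """Randomly assign each harassing subject to one of the
    four bot conditions. A fixed seed makes the assignment
    reproducible for this sketch."""
    rng = random.Random(seed)
    return {subject: rng.choice(CONDITIONS) for subject in subjects}

# Example: assign three (hypothetical) harassing users.
assignments = assign_conditions(["user_a", "user_b", "user_c"])
```

After assignment, each subject would be tweeted at by a bot from their assigned condition, and slur usage would be tracked over the following two months as the behavioral outcome.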
Approaches that operate by promoting positive social norms, like the one employed in this paper, may offer a better way to build less toxic online communities. We find that subjects who were sanctioned by a high-follower white male significantly reduced their use of racist slurs.