Turning the Virtual Tables: Government Strategies for Addressing Online Opposition with an Application to Russia

In this paper, we examine how authoritarian and competitive authoritarian regimes have responded to the potential threat posed by social media; in particular, we analyze over 14 million Russian-language tweets and reveal the diverse set of tools and functions that bots provide.

Abstract

We introduce a novel classification of strategies employed by autocrats to combat online opposition generally, and opposition on social media in particular. Our classification distinguishes online from offline responses, and censorship from engagement in opinion formation. For each of the three resulting options (offline action, technical restrictions on access to content, and online engagement), we provide a detailed account of the evolution of Russian government strategy since 2000. To illustrate the feasibility of researching online engagement, we construct and assess tools for detecting the activity of political "bots," or algorithmically controlled accounts, on Russian political Twitter, and test these methods on a large dataset of politically relevant Twitter data from Russia gathered over a year and a half.

Background

When social media first burst onto the political scene, the overwhelming emphasis was on its potential for allowing citizens to organize outside of traditional hierarchical arrangements. Nowhere was this more important than in authoritarian and competitive authoritarian regimes, where social media was posited as a way to level the playing field because traditional institutions were largely under the control of the state. If we label this original interpretation of the intersection of social media and politics "Social Media 1.0," then "Social Media 2.0" is the story of how these same authoritarian and competitive authoritarian regimes woke up to the threat that social media posed to their rule and how they responded.

Study

In this paper, we examine the dynamics of “Social Media 2.0.” First, we define the baseline strategies that autocrats employ to combat opposition on social media: offline actions, technical restrictions on access to content, and online engagement. We then map this classification onto the evolution of Russia’s Internet policy since Putin’s first election as Russian President in 2000. We also identify one particular form of online engagement, the use of political bots, as a fruitful area for future academic research by demonstrating that digital forensic techniques can be used to observe and analyze this form of activity.
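
The paper does not publish its collection pipeline, but as a rough illustration of the kind of screening a politically relevant Russian-language corpus requires, the sketch below keeps tweets that are tagged as Russian and mention at least one political keyword stem. The keyword list and the tweet fields (`lang`, `text`) are illustrative assumptions, not the authors' actual collection criteria.

```python
# Minimal keyword screen for politically relevant Russian-language tweets.
# The keyword stems and the tweet fields ('lang', 'text') are illustrative
# assumptions, not the authors' actual collection criteria.
POLITICAL_KEYWORDS = ("путин", "навальн", "выбор", "митинг", "оппозици")

def is_politically_relevant(tweet: dict) -> bool:
    """Keep tweets tagged as Russian whose text contains a political keyword stem."""
    if tweet.get("lang") != "ru":
        return False
    text = tweet.get("text", "").lower()
    return any(stem in text for stem in POLITICAL_KEYWORDS)

sample = [
    {"lang": "ru", "text": "Сегодня митинг на Пушкинской площади"},
    {"lang": "ru", "text": "Отличная погода в Москве"},
    {"lang": "en", "text": "Elections in Russia"},
]
print([t["text"] for t in sample if is_politically_relevant(t)])  # keeps only the first tweet
```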

Results

After analyzing over 14 million tweets from approximately 1.3 million accounts that use Russian as their primary language, we identified accounts with no friends, a low followers-to-friends ratio, low entropy of inter-tweet time intervals, and many identical tweets. Among those, we selected 512 highly active accounts for verification by a team of human coders, ensuring high intercoder reliability. A very conservative estimate suggests that 77 percent of these accounts were indeed bots (1 percent were cyborgs that share more characteristics with bots than with humans, a further 18.5 percent were classified as unclear because they resemble both bots and cyborgs as well as commercial spam accounts, and only 1.4 percent were human). Our classification of bot accounts by content as well as by political inclination demonstrates that different types of bots perform different functions and that a combination of different techniques is needed to detect them. Although bots are but one weapon in the arsenal of politicians trying to shape online discourse, even this single category turns out to encompass a diverse set of tools.
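
The published text does not include the screening code; what follows is a minimal sketch of how the account-level features listed above could be computed, assuming each account is a plain dictionary with hypothetical fields (`friends_count`, `followers_count`, `tweet_texts`, `tweet_timestamps`) and using a simple histogram-based Shannon entropy. The field names, bin count, and feature definitions are illustrative assumptions, not the authors' implementation.

```python
import math
from collections import Counter

def shannon_entropy(values, bins=24):
    """Shannon entropy (in bits) of a histogram of the values with the given bin count."""
    if len(values) < 2:
        return 0.0
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    counts = Counter(min(int((v - lo) / width), bins - 1) for v in values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def bot_features(account):
    """Compute the screening features for one account (hypothetical dict layout)."""
    texts = account["tweet_texts"]
    times = account["tweet_timestamps"]          # POSIX timestamps, sorted ascending
    gaps = [t2 - t1 for t1, t2 in zip(times, times[1:])]
    friends = account["friends_count"]
    followers = account["followers_count"]
    top_repeat = Counter(texts).most_common(1)[0][1] if texts else 0
    return {
        "no_friends": friends == 0,
        # Ratio is undefined with zero friends; treat it as infinite.
        "followers_to_friends": followers / friends if friends else float("inf"),
        # Near-zero entropy means tweets arrive on a rigid, machine-like schedule.
        "interval_entropy": shannon_entropy(gaps),
        # Share of the account's output taken up by its single most repeated tweet.
        "duplicate_share": top_repeat / len(texts) if texts else 0.0,
    }

example = {
    "friends_count": 0,
    "followers_count": 12,
    "tweet_texts": ["Голосуй!", "Голосуй!", "Голосуй!"],
    "tweet_timestamps": [0.0, 3600.0, 7200.0],
}
print(bot_features(example))
```

Accounts that look anomalous on several of these features at once would then be passed to human coders, mirroring the verification step described above.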