Reducing Exposure To Misinformation: Evidence from WhatsApp in Brazil

August 16, 2024 · Analysis

Deactivating multimedia on WhatsApp in Brazil consistently reduced exposure to online misinformation during the pre-election weeks in 2022, but it did not change whether false news was believed, nor did it reduce polarisation.

Image: WhatsApp Messenger splash screen. Credit: Mika Baumeister.

This article was originally published at VoxDev.

2024 has been called the year of elections. With more than 70 countries, representing half the world’s population, heading to the polls, many worry that online misinformation could foster discord and undermine election integrity. Much of the news coverage about elections and online misinformation focuses on Western democracies, including the United States and European countries. Similarly, research about the online information environment focuses on the social media platforms popular in these nations, such as Twitter and Facebook.

However, in most of the world — particularly in the Global South, where there have been national elections this year in India, South Africa, Mexico, and Venezuela, to name a few — misinformation often reaches citizens through social messaging apps, most notably WhatsApp (Newman et al. 2021, Resende et al. 2019, Valenzuela et al. 2021). Although concerns about misinformation in Global South democracies receive attention from media and local policymakers, there is little scholarly knowledge about how misinformation spreads on these messaging platforms.

We address this knowledge gap by deploying a field experiment with WhatsApp users during the 2022 presidential election in Brazil, a country where WhatsApp use is especially prevalent. Our goal for this study was to understand how users are exposed to misinformation on WhatsApp — and what effects this exposure has on political attitudes and beliefs.

Spreading misinformation online: Global North vs Global South

The spread of, and exposure to, misinformation is commonly linked to conventional feed-based platforms like Twitter and Facebook. There are a variety of arguments about how this works. For example, some contend that social media has led to a fragmentation of the media market and a reduction in the quality of online news sources (Aruguete et al. 2021, Tokita et al. 2021). In this context, scholars argue that social media facilitates users’ exposure to online rumors, which are more likely to be shared by communities of like-minded partisans, generating downstream effects on belief in false rumors, levels of polarisation, and outgroup animosity (Osmundsen et al. 2021, Rathje et al. 2021). These effects are then magnified by the dynamics of the platforms, with previous research showing that false information spreads faster than true information on social media (Del Vicario et al. 2016).

However, the type of social media platforms predominantly used in the Global South are fundamentally different, introducing novel challenges to understanding the spread of online information in these countries. With nearly 2 billion active global users, WhatsApp, an encrypted messaging app that allows both one-to-one and group communications, is the leading social media platform in the Global South. Users rely on WhatsApp to communicate with friends, conduct business, and consume news, including content about politics and elections.

How information spreads — and the type of information that gets shared — is very different on WhatsApp compared to feed-based platforms. While the latter serve users content based on a mix of algorithms and who they follow, content propagation on WhatsApp depends more heavily on users’ decisions to forward content, both in group settings and in one-to-one chats. As a result, the type of content users share is also different. Feed-based platforms are dominated by text-based information, which comes from content producers such as journalists, news organizations, politicians, and influencers. On WhatsApp, without news feeds or creators producing content for their accounts’ followers, the most viral information travels across chats in a quasi-anonymous format, lacking any metadata, and is typically crafted for easy distribution across different groups and chat conversations. As a consequence, instead of text-based news articles and posts, it is easy-to-share multimedia content (such as videos, images, audio, and GIFs) that plays an important role in WhatsApp’s informational environment.

Our experiment: WhatsApp multimedia deactivation in Brazil

Media coverage suggests that WhatsApp is a primary channel for misinformation exposure in the Global South. However, no studies have drawn causal links between the platform, user exposure, and belief in misinformation. Motivated by these popular claims, we ran a field experiment (Ventura et al. 2024) with more than 700 WhatsApp users during the weeks leading up to the 2022 Brazilian presidential election, a period defined by a polarised political climate and a high volume of election-related content circulating on WhatsApp.

The design was inspired by related social media deactivation experiments (Allcott et al. 2020, Asimovic et al. 2021). Previous studies with Facebook have explored how deactivating users’ accounts for a set period of time affects their political knowledge, levels of polarisation, and well-being. In our case, since multimedia content is a prime driver of misinformation on WhatsApp, we randomly assigned a set of Brazilian WhatsApp users to turn off the automatic download of multimedia content for a three-week period leading up to the election. This meant that all media — i.e. videos, images, and audio — would not be automatically downloaded, and thus not be viewable to these users, unless participants purposefully clicked on them. We instructed them not to do so and monitored compliance closely. A few days after the election, participants completed a survey designed to measure their exposure to misinformation, belief in that misinformation, levels of polarisation, and their subjective well-being.
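As a rough illustration of the assignment step, the sketch below is hypothetical: the even split, participant identifiers, and random seed are assumptions for illustration, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(2022)

# Hypothetical participant IDs; the real study recruited 700+ WhatsApp users.
participants = [f"user_{i:03d}" for i in range(700)]

# Split participants evenly between the multimedia-deactivation treatment
# and a control group that keeps automatic downloads on.
assignment = np.array(["treatment"] * 350 + ["control"] * 350)
rng.shuffle(assignment)

groups = dict(zip(participants, assignment))

# Treated users are asked to switch off automatic media download in
# WhatsApp's settings for three weeks; compliance is then monitored.
n_treated = sum(label == "treatment" for label in groups.values())
print(n_treated, "participants assigned to deactivate multimedia")
```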

What were the impacts of this intervention on misinformation?

We found that deactivating multimedia on WhatsApp consistently reduced exposure to online misinformation during the pre-election weeks (a 0.38 standard deviation reduction). There was also a reduction in exposure to true news, but it was considerably smaller than the reduction in misinformation exposure.
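To make the reported effect size concrete, here is a minimal, hypothetical sketch of how a difference in exposure between groups can be expressed in standard deviation units; the simulated numbers and variable names are our assumptions, not the study’s data or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated exposure scores (e.g. how many false stories a respondent reports
# having seen); these values are illustrative only.
control = rng.normal(loc=5.0, scale=2.0, size=350)    # auto-download left on
treatment = rng.normal(loc=4.2, scale=2.0, size=350)  # auto-download turned off

# Standardise the difference in means by the control-group standard deviation,
# one common convention for reporting effects in standard deviation units
# (the paper may standardise differently).
effect_sd = (treatment.mean() - control.mean()) / control.std(ddof=1)
print(f"Estimated effect: {effect_sd:.2f} SD")  # negative value = less exposure
```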

However, while the deactivation significantly reduced exposure to false news, consistent with previous studies (Allcott et al. 2020), we found no difference in whether participants believed the false news. The experiment also found no changes in levels of polarisation or in subjective well-being, suggesting that a short-term reduction in exposure to potentially polarising political content might not address concerns about rising levels of polarisation.

Interestingly, the lack of effects on accuracy assessments was not uniform across all users. Those who received political content on WhatsApp multiple times a day did improve their capacity to identify false rumors. Conversely, those who rarely received political news via WhatsApp became significantly worse at identifying misinformation. Similar effects have been detected in prior studies and speak to the importance of investing more in research that oversamples heavy online information consumers.

Takeaways for efforts to reduce misinformation

We wish to conclude by presenting some important scientific and policy takeaways from our study. First, on a different platform and in a different country, we come to conclusions similar to those of recent studies of the impacts of Facebook and Instagram during the 2020 U.S. election (Guess et al. 2023a, Guess et al. 2023b, Nyhan et al. 2023, Allcott et al. 2024): that simple adjustments to how users engage with social media platforms are not sufficient on their own to shift important political attitudes. This may be disappointing for those who had hoped to find technical fixes to large societal problems, but it is also probably an important reset of baseline assumptions about what such proposed changes can achieve.

At the same time, like Guess et al. (2023b), we do find that a fairly simple tweak — in our case, increased friction in accessing multimedia — reduces exposure to misinformation online and affects accuracy beliefs for heavy WhatsApp users. The caveat is that, as Guess et al. (2023b) found on Facebook, this reduction in exposure to misinformation is accompanied by a reduction in exposure to political news generally. Furthermore, as in other similar studies (Aslett et al. 2022), our findings offer yet more evidence that even when interventions do not have an impact on the population writ large, they may still affect subgroups of interest at the tails of the distribution.

Lastly, our study shows the continued importance of moving beyond what we know about the impact of social media usage on politics in the United States (Tucker et al. 2018). As the vast majority of social media users reside outside of the United States, we must continue subjecting what we think we know about social media to different theoretical approaches and empirical tests, spanning not only various geographic contexts, but also different platform types that enjoy tremendous popularity elsewhere. Our research on WhatsApp use in Brazil is an important step in that direction, but much more remains to be done in this regard.