Misinformation Beyond Traditional Feeds: Evidence from a WhatsApp Deactivation Experiment in Brazil
News coverage of online election misinformation centers on Western democracies, and research focuses mainly on the social media platforms popular in those nations, such as X/Twitter and Facebook. We address the scarcity of scholarly knowledge about how misinformation spreads on social messaging apps, like WhatsApp, in the Global South.
Citation
Ventura, Tiago, Rajeshwari Majumdar, Jonathan Nagler, and Joshua A. Tucker. "Misinformation Beyond Traditional Feeds: Evidence from a WhatsApp Deactivation Experiment in Brazil." The Journal of Politics (2025). https://doi.org/10.1086/737172
Date Posted
Jul 08, 2025
Abstract
In most advanced democracies, concerns about the spread of misinformation are typically associated with feed-based social media platforms like Twitter and Facebook. These platforms also account for the vast majority of research on the topic. However, in most of the world, particularly in Global South countries, misinformation often reaches citizens through social media messaging apps, particularly WhatsApp. To fill the resulting gap in the literature, we conducted a multimedia deactivation experiment to test the impact of reducing exposure to potential sources of misinformation on WhatsApp during the weeks leading up to the 2022 Presidential election in Brazil. We find that this intervention significantly reduced participants’ recall of false rumors circulating widely during the election. However, consistent with theories of mass media minimal effects, a short-term change in the information environment did not lead to significant changes in belief accuracy, political polarization, or well-being.
Background
Much of the news coverage about elections and online misinformation focuses on Western democracies, including the United States and European countries. Similarly, research about the online information environment focuses on the social media platforms popular in these nations, such as Twitter and Facebook. However, in most of the world — particularly in the Global South — misinformation often reaches citizens through social messaging apps, above all WhatsApp. Although concerns about misinformation in Global South democracies receive attention from the media and local policymakers, there is a scarcity of scholarly knowledge about how misinformation spreads on these messaging platforms.
Our new CSMaP study, accepted for publication in the Journal of Politics, addresses this knowledge gap by deploying a field experiment with WhatsApp users during the 2022 presidential election in Brazil, a country where WhatsApp use is especially prevalent. Our goal for this study was to understand how users are exposed to misinformation on WhatsApp and what effects this exposure has on political attitudes and beliefs.
Our Experiment: WhatsApp Multimedia Deactivation in Brazil
Media coverage suggests that WhatsApp is a primary channel for misinformation exposure in the Global South. However, no studies have drawn causal links between the platform, user exposure, and belief in misinformation. Motivated by these popular claims, we ran a field experiment with more than 700 WhatsApp users during the weeks leading up to the 2022 Brazilian presidential election, a period defined by a polarized political climate and a high volume of election-related content circulating on WhatsApp.
How information spreads — and the type of information that gets shared — is very different on WhatsApp compared to feed-based platforms. While feed-based platforms serve users content based on a mix of algorithms and who they follow, content propagation on WhatsApp depends more heavily on users’ decisions to forward content, both in group settings and in one-to-one chats. As a result, the type of content users share is also different. Feed-based platforms are dominated by text-based information, which comes from content producers such as journalists, news organizations, politicians, and influencers. On WhatsApp, without news feeds or creators producing content for their accounts’ followers, the most viral information travels across chats in a quasi-anonymous format, lacking any metadata, and is typically crafted for easy distribution across different groups and chat conversations. As a consequence, instead of text-based news articles and posts, it is easy-to-share multimedia content (such as videos, images, audio, and GIFs) that drives WhatsApp’s informational environment.
Our design was inspired by related social media deactivation experiments. Previous studies with Facebook have explored how deactivating users’ accounts for a set period of time affects their political knowledge, levels of polarization, and well-being. In our case, since multimedia is the primary format in which misinformation circulates on WhatsApp, we randomly assigned a set of users to turn off automatic downloading of multimedia content for the three weeks leading up to the election. WhatsApp offers this setting so that people can minimize their data usage by not automatically downloading images and videos. With it turned off, media files (videos, images, and audio) were not downloaded and therefore not viewable unless participants purposefully clicked on them; we instructed them not to do so and monitored compliance closely. A few days after the election, participants completed a survey designed to measure their exposure to misinformation, belief in that misinformation, levels of polarization, and subjective well-being.
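For readers who want to see the logic of the design in miniature, the sketch below simulates random assignment to the deactivation arm and a simple intention-to-treat comparison of a post-election outcome. All variable names and data are hypothetical illustrations, not the study's actual code or dataset.

```python
# Minimal sketch of the design: random assignment to the multimedia
# deactivation arm, then a difference-in-means comparison of a post-election
# survey outcome. Names and data are hypothetical, not the study's own.
import numpy as np

rng = np.random.default_rng(seed=42)

n_participants = 700                                  # roughly the study's sample size
treatment = rng.integers(0, 2, size=n_participants)   # 1 = auto-download turned off

# Hypothetical outcome: how many false rumors a participant recalls seeing
# during the pre-election weeks.
false_rumor_recall = rng.poisson(lam=3.0 - 1.0 * treatment)

# Intention-to-treat estimate: simple difference in group means.
itt_effect = (false_rumor_recall[treatment == 1].mean()
              - false_rumor_recall[treatment == 0].mean())
print(f"Estimated ITT effect on rumor recall: {itt_effect:.2f}")
```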
Results
We found that deactivating multimedia on WhatsApp consistently reduced recall of false rumors that circulated widely online during the pre-election weeks (a 0.38 standard deviation reduction). Recall of true news headlines also declined, but at a considerably lower rate than the reduction in misinformation exposure. However, while the deactivation significantly reduced self-reported exposure to false news, it did not change whether participants believed that news, a result consistent with previous studies. The experiment also found no changes in levels of polarization or in subjective well-being, suggesting that a short-term reduction in exposure to potentially polarizing political content might not address concerns about rising levels of polarization.
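To make the headline number concrete: a "0.38 standard deviation reduction" is typically the raw difference in mean recall between the two arms scaled by the control group's standard deviation. The sketch below illustrates that calculation on simulated data; the paper's actual estimator (for example, a covariate-adjusted regression) may differ.

```python
# Illustration of a standardized effect size: difference in means divided by
# the control-group standard deviation. Data are simulated for illustration.
import numpy as np

def standardized_effect(treated: np.ndarray, control: np.ndarray) -> float:
    """Difference in means scaled by the control-group standard deviation."""
    return (treated.mean() - control.mean()) / control.std(ddof=1)

rng = np.random.default_rng(seed=7)
control = rng.normal(loc=3.0, scale=1.5, size=350)              # hypothetical recall scores
treated = rng.normal(loc=3.0 - 0.38 * 1.5, scale=1.5, size=350)

print(f"Standardized effect: {standardized_effect(treated, control):.2f}")  # about -0.38
```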
Interestingly, the lack of effects on accuracy assessments was not uniform across all users. Those who previously reported receiving political content on WhatsApp multiple times a day did improve their capacity to identify false rumors. Conversely, those who rarely received political news via WhatsApp became significantly worse at identifying misinformation. Similar effects have been detected in prior studies and speak to the importance of investing more in research that oversamples heavy online information consumers.
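One common way to probe this kind of heterogeneity is to interact the treatment indicator with a pre-treatment measure of how often a participant receives political content on WhatsApp. The sketch below uses hypothetical data and variable names and a plain least-squares fit; the paper's own specification may differ.

```python
# Sketch of a heterogeneous-effects check: OLS with a treatment x heavy-user
# interaction, fit by least squares. Names and data are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=11)
n = 700
treatment = rng.integers(0, 2, size=n)
heavy_user = rng.integers(0, 2, size=n)   # 1 = reports political content on WhatsApp daily or more

# Hypothetical accuracy score: treatment helps heavy users and slightly hurts
# light users, mirroring the pattern described above.
accuracy = (0.5
            + 0.2 * treatment * heavy_user
            - 0.1 * treatment * (1 - heavy_user)
            + rng.normal(scale=0.3, size=n))

# Design matrix: intercept, treatment, heavy-user indicator, interaction.
X = np.column_stack([np.ones(n), treatment, heavy_user, treatment * heavy_user])
coefs, *_ = np.linalg.lstsq(X, accuracy, rcond=None)

print(f"Treatment effect for light users:         {coefs[1]: .2f}")
print(f"Additional treatment effect, heavy users: {coefs[3]: .2f}")
```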
Why This Research Matters
We wish to conclude by presenting some important scientific and policy takeaways from our study. First, studying a different platform in a different country, we reach conclusions similar to those of recent studies of the impacts of Facebook and Instagram during the 2020 U.S. election: simple adjustments to how users engage with social media platforms are not sufficient on their own to shift important political attitudes. This may be disappointing to those who hoped to find technical fixes to large societal problems, but it is also an important reset of baseline assumptions about the limits of such proposed changes.
In that vein, similar to Guess et al., we do find that a fairly simple tweak (in our case, added friction in accessing multimedia) reduces exposure to misinformation online and affects accuracy beliefs among heavy WhatsApp users. The caveat is that, as Guess et al. found on Facebook, this reduction in exposure to misinformation is accompanied by a reduction in exposure to political news generally. Furthermore, as in other similar studies, our findings offer yet more evidence that even when interventions do not have an impact on the population writ large, they may still affect samples of interest at the tails of the distribution.
Lastly, our study shows the continued importance of moving beyond what we know about the impact of social media usage on politics in the United States. As the vast majority of social media users reside outside of the United States, we must continue subjecting what we think we know about social media to different theoretical approaches and empirical tests, spanning not only various geographic contexts, but also different platform types that enjoy tremendous popularity elsewhere. Our research on WhatsApp use in Brazil is an important step in that direction, but much more remains to be done in this regard.