Does Reducing Exposure to Image and Video Content on Messaging Apps Reduce the Impact of Misinformation?

July 8, 2025 · News

New research reveals that limiting multimedia downloads on WhatsApp significantly reduced users' exposure to false political rumors, highlighting how misinformation may spread differently on social media messaging platforms.

A Brazilian citizen inserts a ballot into a ballot box. Brazil flag in the background.

Credit: Adobe Stock

Much of the news coverage about elections and online misinformation focuses on Western democracies, including the United States and European countries. Similarly, research on the online information environment centers on the social media platforms popular in these nations, such as Twitter and Facebook. However, in most of the world — particularly in the Global South — misinformation often reaches citizens through messaging apps, chiefly WhatsApp. Although concerns about misinformation in Global South democracies receive attention from media and local policymakers, scholarly knowledge about how misinformation spreads on these messaging platforms remains scarce.

Our new CSMaP study, accepted for publication in the Journal of Politics, addresses this knowledge gap by deploying a field experiment with WhatsApp users during the 2022 presidential election in Brazil, a country where WhatsApp use is especially prevalent. Our goal for this study was to understand how users are exposed to misinformation on WhatsApp and what effects this exposure has on political attitudes and beliefs.

Our Experiment: WhatsApp Multimedia Deactivation in Brazil

Media coverage suggests that WhatsApp is a primary channel for misinformation exposure in the Global South. However, no studies have drawn causal links between the platform, user exposure, and belief in misinformation. Motivated by these popular claims, we ran a field experiment with more than 700 WhatsApp users during the weeks leading up to the 2022 Brazilian presidential election, a period defined by a polarized political climate and a high volume of election-related content circulating on WhatsApp.

How information spreads — and the type of information that gets shared — is very different on WhatsApp compared to feed-based platforms. While the latter serve users content based on a mix of algorithms and who they follow, content propagation on WhatsApp depends more heavily on users’ decisions to forward content, both in group settings and in one-to-one chats. As a result, the type of content users share is also different. Feed-based platforms are dominated by text-based information, which comes from content producers such as journalists, news organizations, politicians, and influencers. On WhatsApp, without news feeds or creators producing content for their accounts’ followers, the most viral information travels across chats in a quasi-anonymous format, lacking any metadata, and is typically crafted for easy distribution across different groups and chat conversations. As a consequence, instead of text-based news articles and posts, it is easy-to-share multimedia content (such as videos, images, audio, and GIFs) that drives WhatsApp’s informational environment.

Our design was inspired by related social media deactivation experiments. Previous studies of Facebook have explored how deactivating users’ accounts for a set period of time affects their political knowledge, levels of polarization, and well-being. In our case, since multimedia content is the primary format in which misinformation circulates on WhatsApp, we randomly assigned a set of users to turn off the automatic download of multimedia content for the three-week period leading up to the election. WhatsApp includes this feature so that people can minimize their data usage by not automatically downloading images and videos. As a result, all media — i.e., videos, images, and audio — was not downloaded automatically and thus not viewable to these users unless they purposefully clicked on each item. We instructed participants not to do so and monitored compliance closely. A few days after the election, participants completed a survey designed to measure their exposure to misinformation, belief in that misinformation, levels of polarization, and subjective well-being.
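For readers who prefer code, the assignment logic can be illustrated with a minimal sketch (hypothetical variable names and simulated data, not the study's actual pipeline):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical sketch of the assignment step. Roughly 700 enrolled
# WhatsApp users are split at random: half turn off automatic multimedia
# downloads for the three pre-election weeks, half keep their settings.
n = 700
participants = pd.DataFrame({
    "user_id": np.arange(n),
    "arm": rng.permutation(np.repeat(["deactivate", "control"], n // 2)),
})

# Compliance is monitored during the treatment window; a few days after
# the election, all participants take the same survey measuring exposure
# to misinformation, belief in it, polarization, and well-being.
print(participants["arm"].value_counts())
```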

What We Found

We found that deactivating multimedia on WhatsApp consistently reduced recall of false rumors that circulated widely online during the pre-election weeks (a 0.38 standard deviation reduction). Recall of true news headlines also dropped, but at a considerably lower rate than the reduction in misinformation exposure. However, while the deactivation significantly reduced self-reported exposure to false news, we found, consistent with previous studies, no difference in whether participants believed that false news. The experiment also found no changes in levels of polarization or in subjective well-being, suggesting that a short-term reduction in exposure to potentially polarizing political content might not address concerns about rising polarization.
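To make the headline number concrete: a "0.38 standard deviation reduction" is the treatment-control difference in mean recall divided by the outcome's standard deviation. A worked example with made-up numbers chosen only to reproduce that magnitude:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative recall scores (how many widely circulated false rumors a
# participant reports having seen); the values are simulated, not real.
control = rng.normal(loc=3.0, scale=1.0, size=350)
treated = rng.normal(loc=3.0 - 0.38, scale=1.0, size=350)

# Standardized effect: raw difference in means scaled by the pooled SD.
pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
effect = (treated.mean() - control.mean()) / pooled_sd
print(f"standardized effect: {effect:.2f} SD")  # roughly -0.38
```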

Interestingly, the lack of effects on accuracy assessments was not uniform across all users. Those who previously reported receiving political content on WhatsApp multiple times a day did improve their capacity to identify false rumors. Conversely, those who rarely received political news via WhatsApp became significantly worse at identifying misinformation. Similar effects have been detected in prior studies and speak to the importance of investing more in research that oversamples heavy online information consumers.
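In regression terms, this kind of heterogeneity is usually probed with a treatment-by-moderator interaction. A minimal sketch using simulated data (assuming statsmodels; this is not the paper's actual model):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 700

df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),
    # 1 = reported receiving political content on WhatsApp many times a day
    "heavy_user": rng.integers(0, 2, n),
})
# Simulated accuracy score: the treatment helps heavy users identify
# false rumors but hurts light users, mirroring the pattern above.
df["accuracy"] = (
    0.3 * df["treated"] * df["heavy_user"]
    - 0.2 * df["treated"] * (1 - df["heavy_user"])
    + rng.normal(0, 1, n)
)

# The coefficient on treated:heavy_user captures how the treatment
# effect differs between heavy and light political-content consumers.
model = smf.ols("accuracy ~ treated * heavy_user", data=df).fit()
print(model.params)
```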

Why This Research Matters 

We wish to conclude by presenting some important scientific and policy takeaways from our study. First, on a different platform and in a different country, we come to conclusions similar to those of recent studies of the impacts of Facebook and Instagram during the 2020 U.S. election: simple adjustments to how users engage with social media platforms are not, on their own, sufficient to shift important political attitudes. This may disappoint those who hoped for technical fixes to large societal problems, but it is also an important reset of baseline assumptions about the limits of such proposed changes.

In that vein, similar to Guess et al., we do, however, find that a fairly simple tweak — in our case, increased friction in accessing multimedia — reduces exposure to misinformation online and affects accuracy beliefs among heavy WhatsApp users. The caveat is that, as Guess et al. found on Facebook, this reduction in exposure to misinformation is accompanied by a reduction in exposure to political news generally. Furthermore, as in other similar studies, our findings offer more evidence that even when interventions have no detectable impact on the population writ large, they may still affect subgroups of interest at the tails of the distribution.

Lastly, our study shows the continued importance of moving beyond what we know about the impact of social media usage on politics in the United States. As the vast majority of social media users reside outside of the United States, we must continue subjecting what we think we know about social media to different theoretical approaches and empirical tests, spanning not only various geographic contexts, but also different platform types that enjoy tremendous popularity elsewhere. Our research on WhatsApp use in Brazil is an important step in that direction, but much more remains to be done in this regard.