Washington Post Megan A. Brown and Maggie MacDonald January 14, 2022 Rep. Marjorie Taylor Greene (R-Ga.) tweeted misinformation about coronavirus vaccines, and after repeated offenses like this, Twitter permanently suspended her personal account. Similarly, Facebook briefly suspended Greene's account after she made the same post on their platform. Our researchers further discuss deplatforming politicians and whether that type of intervention works.
Slate Megan A. Brown and Tessa Knight January 13, 2022 Twitter announced they would halt Trends in Ethiopia, following the country's recent violence and year-long civil war. This intervention was intended to reduce the risk of coordinating further violence or harm. Our research found no discernible change in the volume of tweets or the prevalence of toxic and threatening speech, meaning the Twitter intervention may not have worked as intended. Our researchers go on to explain this phenomenon.
Tech Policy Press Megan A. Brown and Tessa Knight January 11, 2022 The escalating conflict in Ethiopia has left thousands dead and displaced millions more. Previous reports and new documents leaked by Facebook whistleblower Frances Haugen illustrate how social media is fueling ethnic-based violence in Ethiopia. Our researchers discuss how Twitter and other social media interventions are not having the intended effect.
Popular Science Charlotte Hu November 22, 2021 Hate speech is a sprawling, toxic problem that plagues social media. Many tech companies have been thinking up new ways to stem its spread—but it’s a difficult and complicated task. Our new research offers an interesting solution.
Daily Mail Dan Avery November 22, 2021 Gently warning Twitter users that they might face consequences if they continue to use hateful language can have a positive impact, according to our new research. If the warning is worded respectfully, the change in tweeting behavior is even more dramatic.
Tech Policy Press Justin Hendrix November 22, 2021 Twitter has long had a problem of racist abuse, which came to the fore again this summer when users spread hate targeting the England football team following the Euro 2020 Final. Our newest study examines this phenomenon and whether warning users about their potentially hateful language can reduce hate speech on the platform.
Engadget Karissa Bell November 22, 2021 A set of carefully-worded warnings directed to the right accounts could help reduce the amount of hate on Twitter. That’s the conclusion of our new research examining whether targeted warnings could reduce hate speech on the platform.
Protocol Issie Lapowsky November 22, 2021 Our researchers found that warning Twitter users that someone they follow has been suspended — and they could be next — cuts down on hate speech by 10-20%.
Washington Post Cristiano Lima October 29, 2021 Silicon Valley leaders are already plotting out a course for the next generation of the Web, like the so-called “metaverse.” In a series of essays compiled by the Knight Foundation, founding figures in the world of technology lay out their “Lessons from the First Internet Ages,” ahead of a conference on the topic.
Rolling Stone Ryan Bort October 27, 2021 Twitter released a study last Thursday finding a "statistically significant difference favoring the political right wing" when it comes to which tweets are amplified. This means a tweet from Ted Cruz is more likely to come across your timeline than one from Dick Durbin, because Twitter thinks you're more likely to engage with it. Our research helps explain this amplification pattern.