Twitter banned Marjorie Taylor Greene. That may not hurt her much.

2022-01-14T18:36:33+00:00

Washington Post, Megan A. Brown and Maggie MacDonald, January 14, 2022. Rep. Marjorie Taylor Greene (R-Ga.) tweeted misinformation about coronavirus vaccines, and after repeated offenses like this, Twitter permanently suspended her personal account. Facebook similarly suspended Greene's account for a brief period after she made the same post on its platform. Our researchers discuss deplatforming politicians and whether that type of intervention works.

What Happened When Twitter Halted Trending Topics in Ethiopia

2022-01-13T19:10:52+00:00

Slate, Megan A. Brown and Tessa Knight, January 13, 2022. Twitter announced it would halt Trends in Ethiopia following the country's recent violence and year-long civil war. The intervention was intended to reduce the risk of coordinating further violence or harm. Our research found no discernible change in the volume of tweets or in the prevalence of toxic and threatening speech, meaning the intervention may not have worked as intended. Our researchers go on to explain this phenomenon.

Trendless Fluctuation? How Twitter’s Ethiopia Interventions May (Not) Have Worked

2022-01-11T16:47:27+00:00

Tech Policy Press, Megan A. Brown and Tessa Knight, January 11, 2022. The escalating conflict in Ethiopia has left thousands dead and displaced millions more. Previous reports and new documents leaked by Facebook whistleblower Frances Haugen illustrate how social media is fueling ethnic-based violence in Ethiopia. Our researchers discuss how interventions by Twitter and other social media platforms are not having the intended effect.

Polite warnings are surprisingly good at reducing hate speech on social media

2021-11-22T21:51:14+00:00

Popular Science, Charlotte Hu, November 22, 2021. Hate speech is a sprawling, toxic problem that plagues social media. Many tech companies have been thinking up new ways to stem its spread, but it’s a difficult and complicated task. Our new research offers an interesting solution.

A polite warning on Twitter can help reduce hate speech by up to 20 percent because messages that ‘appear legitimate in the eyes of the target’ are the most effective, study finds

2021-11-22T21:47:45+00:00

Daily Mail, Dan Avery, November 22, 2021. Gently warning Twitter users that they might face consequences if they continue to use hateful language can have a positive impact, according to our new research. If the warning is worded respectfully, the change in tweeting behavior is even more dramatic.

“Suspension Warnings” Can Reduce Hate Speech on Twitter

2021-11-22T21:58:42+00:00

Tech Policy Press, Justin Hendrix, November 22, 2021. Twitter has long had a problem with racist abuse, which came to the fore again this summer when users spread hate targeting the England football team following the Euro 2020 Final. Our newest study examines this phenomenon and whether warning users about their potentially hateful language can reduce hate speech on the platform.

Personalized warnings could reduce hate speech on Twitter, researchers say

2021-11-22T16:45:10+00:00

Engadget, Karissa Bell, November 22, 2021. A set of carefully worded warnings directed to the right accounts could help reduce the amount of hate on Twitter. That’s the conclusion of our new research examining whether targeted warnings could reduce hate speech on the platform.

Can Twitter warnings actually curb hate speech? A new study says yes.

2021-11-22T16:34:47+00:00

Protocol, Issie Lapowsky, November 22, 2021. Our researchers found that warning Twitter users that someone they follow has been suspended, and that they could be next, cuts down on hate speech by 10-20%.

New research casts doubt on Twitter’s crowdsourced fact-checking plans

2021-10-29T14:26:11+00:00

Washington Post, Cristiano Lima, October 29, 2021. Silicon Valley leaders are already plotting out a course for the next generation of the Web, like the so-called “metaverse.” In a series of essays compiled by the Knight Foundation, founding figures in the world of technology lay out their “Lessons from the First Internet Ages” ahead of a conference on the topic.

Twitter May Be Amping Conservative Accounts Because People Can’t Stop Dunking on Them

2021-10-29T14:18:12+00:00

Rolling Stone, Ryan Bort, October 27, 2021. The company released a study last Thursday finding a “statistically significant difference favoring the political right wing” when it comes to which tweets are amplified. This means a tweet from Ted Cruz is more likely to come across your timeline than one from Dick Durbin, because Twitter thinks you’re more likely to engage with it. Our research helps explain this pattern of algorithmic amplification.
