Reddit’s campaign against hate speech worked. Even users who stuck around started watching their words more carefully.
Freedom of speech has traditionally been an issue of government and human rights. But more and more companies are providing platforms where anyone can potentially contribute some sort of speech, typically text. And those companies are finding that they face many of the same issues governments have: how to balance giving users the ability to express themselves freely against the possibility that they'll post problematic content.
"Problematic" has various definitions. In some cases, it's truly dangerous, like incitements to violence or false medical advice. And companies may find that they don't want to be associated with expressions of racism, sexism, or other forms of prejudice. But can companies do anything if people use their service for broadcasting content that the companies don't approve of?
A new study answers that question with a clear "yes." Researchers looked at Reddit's fight against hate speech, which saw it ban a variety of subreddits in 2015. The analysis suggests that the regular users of these subreddits toned down their language as they moved to other areas on the site. And a number of users who wanted to continue to share offensive opinions simply went to other services, making them someone else's problem.
Back in 2015, Reddit announced that it would begin banning subreddits that "allow their communities to use the subreddit as a platform to harass individuals." Two of the obvious targets, r/fatpeoplehate and r/CoonTown, which targeted the obese and black people respectively, were banned shortly afterward. The new study, published in the Proceedings of the ACM on Human-Computer Interaction, looks at what happened after the ban, using public information from Reddit for some detailed data mining.
From Reddit's perspective, the aggressive moderation worked, as hate speech on the site dropped. While some evidence suggests that people who are truly dedicated to denigrating their fellow humans migrated to other sites, these sites don't have the prominence of Reddit, so the presence of hate speech there has less effect on the targets of these users' disdain.
Some of these individuals are undoubtedly present among the people who abandoned or deleted their accounts following the crackdown. But it's also likely that additional people quit in protest of Reddit getting aggressive with its moderation for the first time, and possible that some who quit reveled in being offensive but weren't committed to the specific hatred that these subreddits pursued. The extent to which people abandoned Reddit because they couldn't pursue their racism or other forms of hatred can't be determined from this data.
But the key finding is that the people who stuck around changed their behavior, conforming their language to the norms of the new subreddits they became active in. Part of that may have been fear of seeing another hangout banned; that same fear may have motivated moderators of the remaining subreddits to police language more aggressively. But it's possible that a few of them figured out that attacking someone for their appearance wasn't socially acceptable in general.
https://arstechnica.co.uk/science/2017/09/reddits-campaign-against-hate-speech-worked/