The Invisible Labor of Content Moderation

Effectively moderating a social computing system isn’t just the right thing to do from an ethical standpoint — it’s in a platform’s best interest. Moderation shapes the platform: as a tool, as an institution for discussion, and as a cultural phenomenon.

But perhaps the trickiest part of the discourse on moderation is that it is inherently hard to examine. Platforms love to be vocal about how much content they support and welcome, but they are typically quiet about how much content they remove. These sites play down the ways in which they intervene in user-generated content: they don’t talk much about the people they ban and suspend, and they rarely tell their audience how they algorithmically prioritize some posts over others. These decisions shape the way people interact on a platform, and they deserve careful attention.

It helps that, under Section 230 of the U.S. Communications Decency Act, platforms can choose to moderate if they wish without becoming liable for the content their users post.

Paid Moderation

With paid moderation, a hired third party reviews reported or flagged content, which helps avoid brigading and supports a more calibrated, neutral evaluation. Facebook is just one example of a platform that uses this method, employing about 15,000 content moderators directly or indirectly. If there are three million posts to review each day, that works out to 200 posts per person: 25 every hour in an eight-hour shift, or under 150 seconds per post.
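
For a sense of scale, here is that back-of-the-envelope arithmetic as a short Python sketch. The moderator headcount and the three-million-posts-per-day figure are the rough numbers quoted above, not official statistics:

    # Rough workload math for paid moderation, using the figures quoted above.
    MODERATORS = 15_000        # approximate number of Facebook content moderators
    POSTS_PER_DAY = 3_000_000  # hypothetical volume of posts needing review
    SHIFT_HOURS = 8

    posts_per_moderator = POSTS_PER_DAY / MODERATORS    # 200 posts per person per day
    posts_per_hour = posts_per_moderator / SHIFT_HOURS  # 25 posts per hour
    seconds_per_post = 3600 / posts_per_hour            # 144 seconds per post

    print(f"{posts_per_moderator:.0f} posts per moderator per day")
    print(f"{posts_per_hour:.0f} posts per hour over an {SHIFT_HOURS}-hour shift")
    print(f"{seconds_per_post:.0f} seconds per post")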

Community Moderation

We see community moderation on sites like Reddit, Twitch, and Steam. On Reddit, for example, users moderate specific subreddits, removing content that breaks the rules. One user, who moderates over 60 subreddits, considers moderating “a glorified janitor’s job, and there is a unique pride that janitors have… When I’m finished for the day I can stand back and admire the clean and functioning subreddit, something a lot of people take for granted.”

Algorithmic Moderation

When Facebook, YouTube, Twitter, and other tech companies sent workers home to protect them from the coronavirus, they ran into a host of new content moderation challenges: as these platforms leaned more heavily on automated systems to flag content, more and more posts were erroneously marked as spam because of weaknesses in the algorithms. Some content moderation can’t be done outside the office for privacy and security reasons, so these companies suddenly had far more content to moderate and far fewer people available to review it.
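
To see why automated flagging misfires, consider a deliberately naive sketch (purely illustrative, not any platform’s actual system): a keyword-based filter that flags anything containing terms common in spam will also sweep up perfectly legitimate posts.

    # A toy keyword-based spam flagger. Real systems are far more sophisticated,
    # but they face the same trade-off between catching bad content and
    # wrongly flagging good content.
    SPAM_KEYWORDS = {"free", "click here", "miracle cure", "limited offer"}

    def looks_like_spam(post: str) -> bool:
        text = post.lower()
        return any(keyword in text for keyword in SPAM_KEYWORDS)

    posts = [
        "Click here for a miracle cure!!!",                         # spam, correctly flagged
        "The library is offering free coronavirus testing today",   # legitimate, falsely flagged
        "Great thread on how subreddits handle rule-breaking",      # legitimate, passes
    ]

    for post in posts:
        label = "FLAGGED" if looks_like_spam(post) else "ok"
        print(f"{label:7} | {post}")

Without human reviewers to catch mistakes like the second post, those false positives go straight through, which is exactly the pattern platforms saw as they scaled up automated moderation.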
