The Invisible Labor of Content Moderation

Gaby Goldberg
9 min read · Jul 27, 2020


Have you ever taken a look at Facebook’s content policies? Or Twitter’s? Probably not. They’re fairly hard to find, and most social media platforms don’t go out of their way to advertise their moderation rules. But this silence obscures a fascinating tension: moderation is the real commodity of any social computing system. It determines what kinds of content are allowed on a given platform, and it has downstream effects on how people use that platform to interact. Moderation shapes social norms, public discourse, and cultural production. So why does it receive so little scrutiny?

The benefits of social media platforms are obvious. They foster connection, community, and opportunity; Supreme Court Justice Anthony Kennedy even went so far as to call these sites the “modern public square.” But alongside this value, the perils of social computing platforms are just as apparent, and painfully underdiscussed. We’re all aware of the hateful, violent, pornographic, and otherwise obscene content that can stain our Internet. The problem only grows with scale: as these systems get bigger and support wider user bases, a proportionally smaller share of content can receive moderator attention. As a result, companies have had to resort to additional moderation measures.

It’s time to refocus the conversation on social computing systems and scrutinize their content moderation policies more carefully. Moderation policies and social interactions define a system just as much as its technical infrastructure does. The two are interrelated and jointly responsible for outcomes; a platform’s technical elements alone are not enough to determine how it behaves or what it produces.

Effectively moderating a social computing system isn’t just the right thing to do from an ethical standpoint — it’s in a platform’s best interest. Moderation shapes the platform: as a tool, as an institution for discussion, and as a cultural phenomenon.

But perhaps the trickiest part of this discourse on moderation is that it is inherently hard to examine. Platforms are vocal about how much content they support and welcome, but they’re typically quiet about how much content they remove. These sites play down the ways in which they intervene in user-generated content: they don’t talk much about the people they ban and suspend, and they don’t tend to tell their audience how they algorithmically prioritize some posts over others. These decisions shape the way people interact on a platform, and they deserve careful attention.

In public, platforms generally frame themselves as unbiased and noninterventionist, but the reality is more complex. Moderation is hard. It’s time- and resource-intensive, and it’s unclear what the standards should be (spoiler alert: it’s not just about removing content that’s illegal). Users are also just as quick to condemn the intrusion of platform moderation as they are to condemn its absence. In 2016, for example, far-right political commentator Milo Yiannopoulos’s ban from Twitter led to an outcry from supporters who framed it as an attack on free speech. In many cases, a single moderation misstep can generate enough public outcry to overshadow thousands of silent successes.

But wait: don’t we have free speech under the First Amendment? Yes, but social computing platforms are not Congress. By law, they don’t have to allow all speech: the safe harbor provision in Section 230 of the Communications Decency Act grants platforms that host user-generated content the right, but not the responsibility, to restrict certain kinds of speech.* This safe harbor has two intertwined components:

  1. Platforms are not liable for the content posted to them (e.g., if someone posts a hateful comment about me on Twitter, I can’t sue Twitter).
  2. Platforms that choose to moderate don’t become liable for doing so.

Social computing sites are then tasked with taking “the First Amendment, this high-minded, lofty legal concept and [converting] it into an engineering manual that can be executed every four seconds for any piece of content from anywhere on the globe,” as explained by this Radiolab podcast. “When you’ve got to move that fast, sometimes justice loses.”

*[Brief Footnote: The interpretation of Section 230 is currently being contested. On July 27th, the Commerce Department petitioned the Federal Communications Commission to narrow the liability protections for online companies. The FCC doesn’t currently regulate social media companies under Section 230, but the petition asks it to impose regulatory requirements. This is part of what President Trump pitched in May as a broader “crackdown on anti-conservative bias.”]

We’ve learned that moderation is an incredibly difficult (and often thankless) task. On top of this, the safe harbor provision tells us that platforms don’t have to moderate if they don’t want to (perhaps this is why Mark Zuckerberg held steadfast in 2016 that Facebook is not a media company, so as not to be burdened with the legal obligations that apply to media companies but not to social platforms). So… why do it?

There are myriad reasons to aim for a well-moderated site. A platform’s environment and mood can influence users’ propensity to engage in certain kinds of behavior, and there can be real economic consequences when platforms lose users who are driven away by abuse or uncomfortable content. Left unchecked, this decline can culminate in an Eternal September: an influx of newcomers permanently overwhelms a community’s norms, and effective moderation becomes impossible.

Content moderation is necessary, but it’s incredibly difficult to execute effectively. There seem to be three “imperfect solutions”: paid moderation, community moderation, and algorithmic moderation. What works in each? What doesn’t, and what can we learn?

Paid Moderation

With paid moderation, a third party reviews flagged content, which helps avoid brigading and supports a more calibrated, neutral evaluation. Facebook is one example of a platform that uses this method, employing about 15,000 content moderators directly or indirectly. If there were three million posts to review each day, that would work out to 200 per person: 25 every hour of an eight-hour shift, or under 150 seconds per post.
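As a quick sanity check on that arithmetic, here is the same back-of-the-envelope calculation as a short Python sketch (the three-million-posts-per-day figure is an illustrative assumption, not a reported statistic):

```python
# Back-of-the-envelope moderator workload, using the illustrative figures above.
POSTS_PER_DAY = 3_000_000   # assumed volume of posts needing review each day
MODERATORS = 15_000         # moderators employed directly or indirectly
SHIFT_HOURS = 8             # one working shift

posts_per_moderator = POSTS_PER_DAY / MODERATORS      # 200 posts per person per day
posts_per_hour = posts_per_moderator / SHIFT_HOURS    # 25 posts per hour
seconds_per_post = 3600 / posts_per_hour              # 144 seconds per post

print(f"{posts_per_moderator:.0f} posts per moderator per day")
print(f"{posts_per_hour:.0f} posts per hour")
print(f"{seconds_per_post:.0f} seconds per post")
```

Under these assumptions, each moderator gets a little over two minutes per post; in practice, the time available is often far shorter.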

A demanding job like this can result in major emotional trauma for moderators who spend days on end reviewing troubling content (related: Facebook agreed to pay $52 million in a settlement with content moderators who developed PTSD on the job). Additionally, evaluators may have only seconds to make a snap judgment about a specific piece of content, so there’s still room for error.

Community Moderation

We see community moderation on sites like Reddit, Twitch, and Steam. On Reddit, for example, users moderate specific subreddits, removing content that breaks the rules. One user, who moderates over 60 subreddits, considers moderating “a glorified janitor’s job, and there is a unique pride that janitors have… When I’m finished for the day I can stand back and admire the clean and functioning subreddit, something a lot of people take for granted.”

Stronger actions have worked on Reddit, too: the site’s 2015 ban of two subreddits for violating its anti-harassment policy led many of the affected accounts to leave the platform entirely (or to migrate to other subreddits, where their hate speech dropped drastically). On Twitch, community moderators are responsible for removing content and banning users in real time. One study of pro- and anti-social behavior on Twitch showed that moderating content or banning users substantially decreases negative behaviors in the short term.

Community moderation leverages users’ intrinsic motivation, and the local experts in a given corner of the Internet are more likely to have the context needed to make difficult calls. That said, community moderators can feel bitter when they don’t get the recognition they deserve, and can resent that the platform profits off their free labor. On top of all this, community moderation varies in quality: it’s not necessarily consistent, fair, or just.

These issues are well articulated in this New York Times article, which describes moderators as “forces for stability and civility in the raucous digital realm. Or that is, they’re supposed to be.” Community moderators can hold extreme power on their respective platforms, and they sometimes wield it unpredictably. During the infamous 2015 Reddit Revolt, moderators upset with a decision made at Reddit’s corporate level shut down forums that collectively drew millions of visits each day, to voice their feeling that the company’s leadership “[did] not respect the work put in by thousands of unpaid volunteers.” A week after the Reddit Revolt, Ellen Pao, the company’s interim chief executive, resigned.

Algorithmic Moderation

When Facebook, YouTube, Twitter, and other tech companies sent workers home to protect them from the coronavirus, they ran into a host of new content moderation challenges. As these platforms leaned more heavily on automated systems to flag content, more and more posts were erroneously marked as spam because of weaknesses in the algorithms. And because some moderation can’t be done outside the office for privacy and security reasons, these companies suddenly had far more content to moderate and far fewer people to do it.

Every large social computing system uses algorithmic moderation in some form. Its biggest strength is speed: the system can remove content before people are hurt or otherwise harmed. But these systems clearly aren’t perfect. They make embarrassing errors their creators never intended, and users often read those errors as deliberate platform policy. And even if a perfectly fair, accountable algorithm were possible, culture would keep evolving and training data would inevitably go stale.

Each of these solutions (paid, community, and algorithmic moderation) has its own strengths and weaknesses. To cover the gaps, many social computing systems use multiple tiers: Tier 1 is typically algorithmic moderation for the most common and easy-to-catch problems, while uncertain judgments move to human moderators in Tier 2. Those human moderators can be paid (as at Facebook) or drawn from the community (as on Reddit), depending on the platform; they may monitor flagged content, all new content as it comes in, or an algorithmically curated queue.
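To make the tiered idea concrete, here is a minimal Python sketch of that flow, not any specific platform’s implementation; the thresholds, the labels, and the `score_content` classifier stub are all hypothetical:

```python
from dataclasses import dataclass

# Hypothetical thresholds; a real platform would tune these per policy category.
AUTO_REMOVE_THRESHOLD = 0.95   # classifier is confident the post violates policy
AUTO_ALLOW_THRESHOLD = 0.05    # classifier is confident the post is fine

@dataclass
class Post:
    post_id: str
    text: str

def score_content(post: Post) -> float:
    """Stand-in for a trained model returning an estimated probability of violation."""
    # A real system would call a classifier here; this placeholder flags nothing.
    return 0.0

def moderate(post: Post, human_review_queue: list) -> str:
    """Tier 1: algorithmic triage. Tier 2: humans handle the uncertain middle."""
    score = score_content(post)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "removed"             # clear violation: act immediately
    if score <= AUTO_ALLOW_THRESHOLD:
        return "published"           # clearly fine: no human needed
    human_review_queue.append(post)  # uncertain: escalate to paid or community moderators
    return "pending_review"

queue: list = []
print(moderate(Post("1", "hello world"), queue))  # -> "published"
```

In this framing, the interesting policy questions live in the middle band: how wide it is, who reviews it, and how quickly, which is exactly where the paid and community models described above come in.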

From all this, what can we learn? Content moderation is how platforms shape user participation into a deliverable experience. Social computing systems moderate (through removal and suspension), recommend (through news feeds and personalized suggestions), and curate (through featured, front-page content). These dynamic decisions fine-tune the way we, as users, experience different online platforms and communities.

It should be every platform’s goal to make the world a more open and transparent place, but that doesn’t mean platforms should run fully unmoderated. Instead, social computing systems will improve as their content moderation improves: “More human moderators. More expert human moderators. More diverse human moderators. More transparency in the process.” It’s our obligation as users to understand and discuss how these platforms are moderated, by whom, and to what ends. It’s both our responsibility and our privilege to work with online platforms to answer these difficult questions.
