Leak Reveals Facebook’s Content Moderation Policies

May 23, 2017 - 3 minutes read

Tasked with monitoring content generated by 2 billion users from a wide range of cultural backgrounds, Facebook’s moderators have a tough, maybe even impossible, job. These reviewers often have less than 10 seconds to decide whether or not a piece of content is appropriate, leaving them no time for nuance. The Guardian recently obtained more than 100 internal training documents that outline the social media platform’s guidelines for moderating content. As iPhone app developers would expect, the leaked rules have stirred up quite a controversy.

As in any debate about censorship, there are free speech advocates concerned about the power and authority of Facebook’s moderators, and critics worried that the guidelines don’t go far enough. The leaked policies try to account for things like context and intent, but by their very nature these kinds of rules are not going to satisfy everyone (or perhaps anyone). Many are surprised by what is permitted, including images of non-sexual child abuse (as long as it’s not celebratory), animal abuse, and live-streamed self-harm, all protected if they are in the spirit of spreading awareness. Generic or non-credible threats are allowed as expressions of anger, but threatening language directed at a politician is strictly prohibited. Chicago iPhone app developers who delve into the training materials will have deeper sympathy for the headache-inducing work Facebook’s content reviewers do on a daily basis.

Even with the aid of software that catches some of the troublesome content before it goes live, the moderators face a task that must often seem futile. The media has been all over Facebook’s failures to properly police hate speech, fake news, revenge porn, and videos of murder and suicide, and these leaked documents show that the social media giant still has a long way to go in addressing these issues. The company recently hired 3,000 new reviewers, but many app developers believe that in order to truly tackle these problems, Facebook is going to need even more sophisticated software to help sort out offensive content. But can an algorithm really determine the context of content? Would this software be able to tell satire apart from hate speech? These are the questions that are keeping Facebook’s execs up at night.
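
To see why context is so hard for software, consider a minimal sketch of a keyword-based filter, the crudest form of automated screening. Everything here (the blocklist, the example posts) is hypothetical and has nothing to do with Facebook’s actual systems; it just illustrates the problem the leaked guidelines are wrestling with.

```python
# Hypothetical, deliberately naive keyword filter -- NOT Facebook's system.
# It flags any post containing a blocklisted word, regardless of context.

BLOCKLIST = {"kill", "attack", "die"}

def naive_flag(post: str) -> bool:
    """Return True if the post contains any blocklisted word."""
    words = {w.strip(".,!?\"'").lower() for w in post.split()}
    return not BLOCKLIST.isdisjoint(words)

posts = [
    "I will kill you",                                 # credible-sounding threat
    "Ugh, Mondays kill me",                            # hyperbole, expression of anger
    "Satirical headline: Traffic to kill us all by 2020",  # satire
]

for post in posts:
    print(f"{naive_flag(post)!s:>5}  {post}")

# All three posts are flagged identically. A word-level test cannot
# separate a threat from hyperbole or satire -- the distinction that
# Facebook's leaked rules (and its human reviewers) are asked to make.
```

Smarter classifiers can weigh surrounding words and user history, but the core difficulty is the same one the training materials grapple with: intent lives outside the text itself.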
