Viewpoint: Why Facebook Needs a Fresh Set of Eyes

Tim Maytom
This week has seen a significant leak of Facebook’s internal documents, used to train the employees who review material that has been flagged as offensive, threatening, disturbing or abusive. The byzantine nature of the rules has drawn fresh criticism of Facebook, with renewed calls for transparency and concerns voiced by everyone from free speech advocates to mental health charities.

The guidelines and rules revealed by the documents span a huge number of topics, from self-harm and animal abuse to Holocaust denial and ‘revenge porn’. With 2bn users now communicating on Facebook’s platform (not counting those who may only use its other services, such as Instagram and WhatsApp), the question of how much responsibility the company bears for content shared through its software looms larger than ever. Facebook users now constitute a significant portion of the global population, and humans don’t always behave politely.

Even prior to the leak, Facebook was aware that it needed to do more. After several high-profile cases in which murders were live-streamed on the social network, the company committed to hiring 3,000 additional staff to review live video content, bringing its community operations team (which manages reported content) to 7,500 people.

That may sound like a large team dedicated to the problem, but the scale involved is huge. Some 400 hours of video are uploaded to Facebook every minute, and in January 2017 alone, nearly 54,000 potential cases of ‘revenge porn’ were reported, and that is just one subset of the content staff have to review.

Beyond the sheer size of the problem, there is the difficulty of separating abuse and hate speech from what is deemed “permissible” on a platform that is struggling to balance its legal and ethical obligations with a commitment to free speech.

“When millions of people get together to share things that are important to them, sometimes these discussions and posts include controversial topics and content,” said Facebook in one of the leaked guidelines. “We believe this online dialogue mirrors the exchange of ideas and opinions that happens throughout people’s lives offline, in conversations at home, at work, in cafes and in classrooms.”

However, Facebook also acknowledges in the documents that “people use violent language to express frustration online” and feel “safe to do so” on its platform – behaviour which means the line between acceptable frustration and legitimate threat becomes blurred and subject to interpretation.

Speaking to The Guardian, Carl Miller, research director at the Centre for the Analysis of Social Media at London-based thinktank Demos, said that Facebook’s content review rules “might be the most important editorial guide sheet the world has ever created”.

“It’s surprising it’s not even longer,” said Miller. “It’s come out of a mangle of thousands of different conversations, pressures and calls for change that Facebook gets from governments around the world.”

It’s only natural that, with a company like Facebook and a problem of this size, our thoughts turn to a tech-based solution. But even a cursory glance at the complexity of Facebook’s guidelines highlights how difficult such a solution would be. While some areas have hard and fast rules, most of the guidelines rely on moderator judgement and discretion. Facebook’s community operations team has to be able to distinguish between sarcasm, hyperbole and genuine threats, a fine line that an automated system would struggle to draw.
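To see why context matters, consider a deliberately naive sketch, written here in Python purely for illustration and in no way Facebook’s actual moderation logic: a keyword filter flags hyperbole and venting just as readily as a credible threat.

```python
# Hypothetical illustration only -- not Facebook's moderation logic.
# A naive keyword filter has no sense of context, so hyperbole and
# genuine threats look identical to it.

THREAT_TERMS = {"kill", "shoot", "stab"}

def naive_flag(post: str) -> bool:
    """Flag a post if it contains any threat-like keyword."""
    text = post.lower()
    return any(term in text for term in THREAT_TERMS)

posts = [
    "I'm going to kill my flatmate for eating my leftovers",   # hyperbole
    "Someone shoot that referee",                              # frustration
    "I'll kill you when you get home, I know where you live",  # credible threat
]

for post in posts:
    print(naive_flag(post), "-", post)

# All three posts are flagged, even though only the last reads as a genuine
# threat -- the judgement call still falls to a human reviewer.
```

Real systems are far more sophisticated than this, but the underlying problem is the same one the leaked guidelines grapple with: intent lives in context, not in individual words.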

Teaching computers to understand jokes is a problem so complex that it has its own branch of computational linguistics. While advances are being made, such as a machine learning algorithm created in early 2016 that could identify whether pictures were funny or not, it’s slow going, and humour is still considered one of the hardest challenges in creating true artificial intelligence.

If that weren’t enough, Facebook hasn’t been just words and pictures for a long time. More and more of the content shared on the platform is video, and from extremist political views to sexually explicit footage, a tech-based solution would need both natural language analysis and machine vision capabilities to spot content that doesn’t belong on Facebook.
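As a rough sketch of what such a multi-modal pipeline might look like, with stand-in scoring functions where the real models would sit (nothing here reflects Facebook’s actual systems):

```python
# Illustrative skeleton only: the two "models" below are stand-ins, but the
# structure shows why video moderation needs both natural language analysis
# (captions, comments, transcripts) and machine vision (the frames themselves).

from dataclasses import dataclass
from typing import List

@dataclass
class VideoPost:
    caption: str
    frames: List[bytes]  # decoded video frames

def text_score(caption: str) -> float:
    """Stand-in for a language model scoring the caption or transcript."""
    return 1.0 if "hypothetical banned phrase" in caption.lower() else 0.0

def frame_score(frame: bytes) -> float:
    """Stand-in for a vision model scoring a single frame."""
    return 0.0  # a real model would return a probability per frame

def triage(post: VideoPost, threshold: float = 0.8) -> str:
    """Route a post either to a human reviewer or leave it up."""
    worst_frame = max((frame_score(f) for f in post.frames), default=0.0)
    if max(text_score(post.caption), worst_frame) >= threshold:
        return "escalate to human reviewer"
    return "leave up"
```

Even in this toy form, the plumbing is trivial; the hard part is the two scoring functions, which is exactly where the judgement calls described in the guidelines live.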

Facebook already uses software to try to prevent certain kinds of material, such as sexual imagery of children or content advocating terrorism, from appearing on its site, and it is working on AI that will help its community operations team find and remove abusive and disturbing content faster. However, the social network could be facing a problem that only good old-fashioned human judgement can solve, and one that could soon start to undermine its entire platform and business model.
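The documents don’t spell out how that software works, but one widely used industry approach, assumed here for illustration rather than confirmed as Facebook’s method, is to fingerprint each upload and match it against a database of previously identified material. The sketch below uses a plain SHA-256 digest for simplicity; production systems typically rely on perceptual hashes that survive re-encoding and cropping.

```python
# Assumed, generic approach -- not confirmed as Facebook's implementation.
# Fingerprint each upload and block it before publication if the fingerprint
# matches a database of previously identified prohibited material.

import hashlib

# Placeholder set: in practice this would be a large, curated database.
KNOWN_BAD_DIGESTS = {
    "<digest of a previously identified prohibited file>",
}

def fingerprint(data: bytes) -> str:
    """Return a cryptographic digest of the uploaded bytes."""
    return hashlib.sha256(data).hexdigest()

def should_block(upload: bytes) -> bool:
    """Block the upload if its digest is already in the known-bad database."""
    return fingerprint(upload) in KNOWN_BAD_DIGESTS
```

Matching previously identified material is the comparatively easy case; judging new posts, where context and intent matter, is where the guidelines and human moderators come in.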
