The Industry Reacts: Facebook's Content Review Rulebook Leaks

With over 100 internal training manuals and flowcharts from Facebook’s content review team leaked and published as part of a Guardian investigation, there are renewed calls for the social media giant to act with greater transparency, and numerous conflicting opinions over how Facebook should tackle disturbing, offensive and controversial content posted on its platform.

The leak of documents comes as Facebook is already under pressure following high-profile events such as livestreamed murders in Cleveland, Ohio and Phuket, Thailand, as well as persistent issues with hate speech and revenge porn on the platform.

Figures just revealed by The Guardian show that the social media firm had to assess nearly 54,000 potential cases of revenge pornography and ‘sextortion’ in January alone, disabling 14,000 accounts as a result of this kind of sexual abuse; 33 of the cases reviewed involved children. Facebook’s labyrinthine policies regarding this kind of content do not make matters easier, with one source telling The Guardian that “sexual policy is the one where moderators make the most mistakes. It is very complex.”

On top of the questions of offensive content, sexual abuse and censorship raised by these revelations, Facebook is an increasingly dominant force in online advertising, and brand safety is a growing concern among many marketers. We reached out to a number of industry experts to ask how this latest news affects Facebook’s position as both the leading social network and a powerful advertising platform, and also asked representatives from digital free speech and privacy advocates the Open Rights Group and mental health charity Mind for their perspective on events.

Erna Alfred Liousas, social media analyst, Forrester Research
Facebook recently added 3,000 people to its moderation team, signalling the complexities of this challenge. This team is learning by doing, as technology isn’t at a point where it can immediately solve these issues. Artificial intelligence solutions must be trained over time.

Global communities across social media face this issue. Facebook stands out because of its size and global reach. Yet the onus for addressing this universal problem lies with both the companies providing the mediums for people to communicate and the people actually using the platforms.

Rob Thurner, founder and managing partner, Burn The Sky
Questions about how Facebook really moderates content are fascinating, but nothing new. The real issue at stake is whether Facebook can maintain the trust of its advertisers and users.

In March, Google faced similar questions about how it moderates content on YouTube, and took swift action to ensure ‘brand safety’ for advertisers by improving the controls that let them align their settings on the platform with their brand values. This is a formidable challenge: 400 hours of video are uploaded every minute, so digital surveillance tools are used because it is not practical to moderate content manually.

Facebook faces a mix of challenges, foremost a lack of trust on many fronts. To its advertisers, Facebook has admitted that its video viewing claims were exaggerated. To the competition authorities, before acquiring WhatsApp in 2014, Facebook insisted it was not technically possible to match WhatsApp users’ accounts with Facebook accounts. Last week, the European Commission fined the company €110m for this misleading claim.

Let’s see how long its 2bn users will trust Facebook as they realise their social media activity, messaging and location data are being linked to further improve Facebook’s targeting and boost its profits, which hit $10bn last year.

Jim Killock, executive director, Open Rights Group
With almost 2bn users each month, Facebook’s decisions about what is and isn’t acceptable have huge implications for free speech. These leaks show that making these decisions is complex and fraught with difficulty.

Facebook will probably never get it right, but at the very least there should be more transparency about its processes. This is why plans in the Conservative manifesto that pledge to compel private companies to regulate content on the internet are problematic and bound to chill free speech in the UK.

Gavin Stirrat, managing director, Voluum
The Times and The Guardian have brought a number of concerns regarding advertising around questionable content into the mainstream media, although this has long been discussed in the trade press. The focus has now shifted from Google to Facebook as it struggles with the difficult job of trying to moderate the enormous volume of content being uploaded to its site every day.

Facebook wants to maintain its status as an online community platform, allowing content to be published that some people may find offensive. The challenge is doing so while also protecting brand advertisers, who rightfully have high standards for the content they deem acceptable for their ads to appear alongside.

The damage that could be done by the increased publicity around these issues is not isolated to Facebook and Google. The recent Google issues were misinterpreted by some as a problem with programmatic advertising as a whole, and in some cases with mobile, which is not the case.

Most ad tech companies neither have direct B2C relationships nor host content. How these issues are addressed will have an impact on the perception of digital as a whole, so it is in the industry’s best interest to work together, where appropriate and possible, to find solutions.

Eve Critchley, head of digital, mental health charity Mind
Streaming people’s experience of self-harm or suicide is an extremely sensitive and complex issue. We don’t yet know the long-term implications of sharing such material on social media platforms for the public and particularly for vulnerable people who may be struggling with their mental health.

What we do know is that there is a great deal of evidence showing that graphic depictions of such behaviour in the media can be very harmful to viewers and potentially lead to imitative behaviour. As such, we feel that social media should not provide a platform to broadcast content of people hurting themselves.

Social media sites should do what they can to help people protect themselves from dangerous content. It is important that they recognise their responsibility in responding to high risk content. While it is positive that Facebook has implemented policies for moderators to escalate situations when they are concerned about someone’s safety, we remain concerned that they are not robust enough.

Facebook and other social media sites must urgently explore ways to make their online spaces safe and supportive. We would encourage anyone managing or moderating an online community to signpost users to sources of urgent help, such as Mind, Samaritans or 999 when appropriate.