Facebook reveals the measures it is taking to remove terrorist content

Facebook has revealed how it intends to fight the spread of extremist content on its platform, as pressure from governments continues to grow on social media companies over their platforms being seen as ‘safe spaces’ for terrorists.

Facebook has made it clear on more than one occasion that terrorism is not tolerated on its platform but is also well aware of how difficult it is to police a platform used by nearly 2bn people each month.

“Our stance is simple: There’s no place on Facebook for terrorism. We remove terrorists and posts that support terrorism whenever we become aware of them. When we receive reports of potential terrorism posts, we review those reports urgently and with scrutiny. And in the rare cases when we uncover evidence of imminent harm, we promptly inform authorities,” said Facebook.

“We’ve been cautious, in part because we don’t want to suggest there is any easy technical fix. It is an enormous challenge to keep people safe on a platform used by nearly 2 billion every month, posting and commenting in more than 80 languages in every corner of the globe. And there is much more for us to do.”

One of the approaches the social network is taking is the use of AI. Facebook is using the technology to remove content deemed to have terrorist implications by getting its systems to identify a variety of red flags.

AI is being applied to image matching, to stop the spread of copies of terrorist photos and videos; language understanding, which detects posts similar to previously removed ones praising or supporting terror groups; removing pages, groups, posts or profiles that support terrorism; detecting new accounts created by repeat offenders; and cross-platform detection across all of Facebook’s platforms, including WhatsApp and Instagram.
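To illustrate the image-matching idea at a very small scale, the sketch below compares the perceptual hash of an uploaded image against hashes of previously removed images. It is a minimal illustration using the open-source imagehash library, not a description of Facebook’s actual systems; the file names and distance threshold are hypothetical.

```python
# Illustrative sketch only: perceptual-hash image matching, not Facebook's
# actual pipeline. Requires the third-party packages Pillow and imagehash
# (pip install Pillow imagehash).
from PIL import Image
import imagehash

# Hypothetical store of hashes for images that were already removed.
known_bad_hashes = [
    imagehash.phash(Image.open("removed_propaganda_photo.jpg")),
]

def matches_known_content(path, max_distance=5):
    """Return True if an upload is a near-duplicate of known removed content.

    A small Hamming distance between perceptual hashes means the images are
    visually similar, even after resizing or recompression.
    """
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - bad <= max_distance for bad in known_bad_hashes)

if matches_known_content("new_upload.jpg"):
    print("Flag upload for removal or human review")
```

In a real system the hash lookup would run against a large index rather than a Python list, but the principle of matching near-duplicates of known content is the same.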

Where the algorithms can’t do the work, or can’t understand context, Facebook still has to rely on human input.

Humans come into play when content or accounts are reported by Facebook users. These reports, which cover all policy violations, are reviewed 24 hours a day across various languages. The content review team is currently made up of 4,500 people, but Facebook says it will add 3,000 more staff over the next year. Despite this, recently leaked documents outlining the guidelines followed by these staff members drew backlash over some of the content reviewers are told to keep online. At the same time, the team is meeting the hate speech removal targets set out by the EU.

In addition to the content review team, Facebook has grown its team of counterterrorism specialists to more than 150 people who are exclusively or primarily focused on countering terrorism. Facebook also has a team that responds to emergency requests from law enforcement.

On top of the use of AI and human staff, Facebook has also developed several partnerships with other companies, civil society, researchers, and governments.

Elsewhere, Facebook’s work to build a safe online community has seen it make changes to its Safety Check feature. The social network has introduced fundraisers, expanded Community Help to desktop, enabled a personal note to be added to a shared Safety Check update, and introduced crisis descriptions from its third-party global crisis reporting agency, NC4.
