Google lays out its plans for the fight against online extremism

Google has revealed how it plans to fight the rise of online extremism, pledging to take additional steps to keep its platforms free from terror-related content. This comes just days after Facebook explained how it is dealing with extremism on its platforms.

The first step Google and YouTube will take is to increase the use of technology to identify terrorism-related videos. The internet giant will train new ‘content classifiers’, built on its machine learning research, to help it identify and remove extremist content more quickly.

Google also says it will increase the number of independent experts in YouTube’s Trusted Flagger programme, adding 50 expert NGOs to the 63 organisations already involved. The company will also expand its work with counter-extremist groups.

Furthermore, Google will take a tougher stance on videos that do not violate its policies but contain inflammatory religious or supremacist content. Such content won’t be removed, but it will appear behind a warning and will not be monetised, recommended or eligible for comments or user endorsements.

Finally, YouTube will look to build on its Creators for Change programme, which promotes YouTube voices against hate and radicalisation. It will do this by working with Google’s Jigsaw unit to use targeted online advertising to reach those at risk of radicalisation and steer them towards anti-terrorist videos.

“Collectively, these changes will make a difference. And we’ll keep working on the problem until we get the balance right,” said Kent Walker, general counsel at Google. “Extremists and terrorists seek to attack and erode not just our security, but also our values; the very things that make our societies open and free. We must not let them. Together, we can build lasting solutions that address the threats to our security and our freedoms. It is a sweeping and complex challenge. We are committed to playing our part.”
