YouTube removed 7.8m videos between July and September

YouTube has released the latest metrics from its Community Guidelines report, giving an update on brand safety and detailing how the platform is removing violative content.

In a blog post, YouTube revealed that between July and September 2018 it removed 7.8m videos, 81 per cent of which were first detected by machines. Of those machine-detected videos, 75.5 per cent had never received a single view, meaning roughly 4.8m of the 7.8m removals happened before anyone had watched the video, and just over 3m removed videos may have been viewed at least once.
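For readers who want to see how those figures fit together, the following is a quick back-of-the-envelope check using only the percentages reported above; the breakdown is an interpretation of the published numbers, not an official YouTube calculation.

```python
# Sanity check of YouTube's Jul-Sep 2018 removal figures as reported in the article.
# Inputs come from the blog post; the derived numbers are approximate.

total_removed = 7_800_000        # videos removed in the quarter
machine_share = 0.81             # share first detected by machines
never_viewed_share = 0.755       # share of machine-detected videos with zero views

machine_detected = total_removed * machine_share        # ~6.3m
never_viewed = machine_detected * never_viewed_share    # ~4.8m removed before any view
possibly_viewed = total_removed - never_viewed          # ~3.0m, the "just over 3m" figure

print(f"Machine-detected removals: {machine_detected:,.0f}")
print(f"Removed before a single view: {never_viewed:,.0f}")
print(f"Removed videos that may have been viewed: {possibly_viewed:,.0f}")
```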

In the post, YouTube said it has always used a mix of human reviewers and technology to address violative content on its platform, but that in 2017 it started applying more advanced machine learning technology to flag content for review by its teams. This combination of smart detection technology and highly-trained human reviewers, it said, has enabled it to enforce its policies consistently and with increasing speed.

Last April, YouTube launched a quarterly YouTube Community Guidelines Enforcement Report, which, as of today, is being expanded to include additional data such as channel removals, the number of comments removed, and the policy reason a video or channel was removed.

The post also explains the process for removing videos. When YouTube detects a video that violates its Community Guidelines, it removes the video and applies a strike to the channel. A channel is terminated outright if it is dedicated to posting content prohibited by the Community Guidelines, or if it commits a single egregious violation, such as child sexual exploitation.

The vast majority of attempted abuse comes from bad actors trying to upload spam or adult content: over 90 per cent of the channels and over 80 per cent of the videos removed in September 2018 were removed for violating YouTube’s policies on spam or adult content. YouTube adds that over 90 per cent of the videos uploaded in September 2018 and removed for Violent Extremism or Child Safety had received fewer than 10 views at the time of removal.

Looking at comments, between July and September of 2018, YouTube’s teams removed over 224m comments for violating its Community Guidelines, the majority for spam. YouTube notes that the total number of removals represents a fraction of the billions of comments posted on YouTube each quarter.

As with videos, YouTube uses a combination of smart detection technology and human reviewers to flag, review, and remove spam, hate speech, and other abuse in comments.

It has also built tools that allow creators to moderate comments on their videos. For example, creators can choose to hold all comments for review, or to automatically hold comments that have links or may contain offensive content. Over 1m creators now use these tools to moderate their channel’s comments.
