Could visual recognition technology be the answer to Facebook’s Live video problems?

Facebook has come under pressure on several occasions over the past year, since the launch of its Live service, due to some of the acts that have been broadcast through the feature. That pressure reached its highest point yet over the last two weeks, with the livestreaming of two brutal murders on two different continents. Should Facebook shut down the service? Or should it look to technology to prevent these acts being shown to the world?

The latest of these murders occurred in Phuket, Thailand, where a 20-year-old man named Wuttisan Wongtalay livestreamed the killing of his 11-month-old daughter, in two separate clips, from the rooftop of a deserted hotel. Wongtalay later took his own life, though this did not appear on video.

The clips remained accessible for around 24 hours, amassing a few hundred thousand views, before being taken down. Some users also uploaded the videos to YouTube, but that platform moved with far more haste than Facebook, claiming to have removed them within 15 minutes of being reported.

This came just days after video of the murder of Robert Godwin Sr. in Cleveland, Ohio, was posted to Facebook by the man dubbed the 'Facebook killer', Steve Stephens, who then used Live to confess to the killing. Facebook admitted it had taken too long to remove the three videos linked to the murder: more than two hours passed between the first video appearing and the content coming down.

Along with promising to review its reporting system, Facebook also said it was working on technologies to help keep its platform safe.

Chief among these is artificial intelligence, which Facebook is working on to detect whether a Live broadcast may be offensive or violent. The problem is whether machines currently have the capability to pull off such a task.

It is important to note that murder isn't the only problem Facebook Live has experienced. There have been suicides, sexual assaults and torture too. Can technology cover all of these areas and successfully identify such unpleasant videos?

Well, Google revealed its Cloud Video Intelligence API last month. The machine learning API automatically recognises objects in videos and makes them searchable, enabling developers to build apps that extract relevant parts of a video and tag scene changes.
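As a rough illustration, here is what a label-detection and shot-change request might look like using Google's Python client library for the API. This is a minimal sketch, not a production implementation: the bucket URI is a placeholder, and the client interface has changed across library versions.

```python
from google.cloud import videointelligence

# A minimal sketch using the google-cloud-videointelligence Python
# client; "gs://example-bucket/clip.mp4" is a placeholder URI.
client = videointelligence.VideoIntelligenceServiceClient()

operation = client.annotate_video(
    request={
        "input_uri": "gs://example-bucket/clip.mp4",
        "features": [
            videointelligence.Feature.LABEL_DETECTION,
            videointelligence.Feature.SHOT_CHANGE_DETECTION,
        ],
    }
)

# Annotation runs asynchronously; block until it completes.
result = operation.result(timeout=300)
annotations = result.annotation_results[0]

# Objects and scenes recognised across the whole video.
for label in annotations.segment_label_annotations:
    print(label.entity.description)

# Scene (shot) changes, reported as start/end time offsets.
for shot in annotations.shot_annotations:
    print(shot.start_time_offset, shot.end_time_offset)
```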

This Google technology is very much a first step on the journey toward what Facebook wants to achieve. However, it can only detect objects, not the mood of a video. Furthermore, researchers at the University of Washington found the API could be easily deceived: periodically splicing an unrelated still image into a video was enough to skew its labels, suggesting it is not yet robust.
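To see why that kind of deception is so cheap, consider a sketch of the editing step itself: replacing roughly one frame per second of a video with an unrelated still image. The file names below are placeholders, and this is only an illustration of the general approach described by the researchers, not their exact method.

```python
import cv2

# Placeholder inputs: a real clip and an unrelated still image.
video = cv2.VideoCapture("original_clip.mp4")
decoy = cv2.imread("unrelated_photo.jpg")

fps = video.get(cv2.CAP_PROP_FPS)
width = int(video.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))
decoy = cv2.resize(decoy, (width, height))

out = cv2.VideoWriter("doctored_clip.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))

frame_index = 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    # Swap in the decoy image roughly once per second; an automated
    # labeller that samples frames can latch onto the decoy's content.
    out.write(decoy if frame_index % int(fps) == 0 else frame)
    frame_index += 1

video.release()
out.release()
```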

Then there are companies like Clarifai and GumGum, which both put heavy emphasis on visual intelligence. Firms like these use computer vision to extract more meaning from images and videos, and technologies like theirs could prove key to identifying offensive or violent content.
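To sketch how a vendor's computer-vision service might be wired into a live stream, the snippet below samples frames and posts them to a moderation endpoint. The URL, key and response fields are all hypothetical stand-ins, not any particular company's real interface.

```python
import cv2
import requests

# Hypothetical moderation endpoint and key; stand-ins, not a real API.
MODERATION_URL = "https://api.example.com/v1/moderate"
API_KEY = "YOUR_API_KEY"

def flag_violent_frames(stream_url, sample_every=30, threshold=0.8):
    """Sample frames from a stream and flag likely-violent ones."""
    capture = cv2.VideoCapture(stream_url)
    frame_index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if frame_index % sample_every == 0:
            # Encode the frame as JPEG and send it for classification.
            _, jpeg = cv2.imencode(".jpg", frame)
            response = requests.post(
                MODERATION_URL,
                headers={"Authorization": f"Bearer {API_KEY}"},
                files={"image": jpeg.tobytes()},
            )
            # Assumed response shape: {"violence": <score from 0 to 1>}
            score = response.json().get("violence", 0.0)
            if score >= threshold:
                print(f"Frame {frame_index}: possible violence ({score:.2f})")
        frame_index += 1
    capture.release()
```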

Whichever way you look at it, the technology has a long way to go, and Facebook may be best served by pulling the plug on its Live service for the time being to avoid further controversy on its platform. However, with one in five Facebook videos now being a livestream, that may be a decision the social network struggles to make.