Tories consider fines for social networks for moderation failings
- Tuesday, May 30th, 2017
The UK government has said it will increase pressure on social media companies over their role in online extremism, and is open to handing out financial penalties – should the Conservatives come out on top in the upcoming general election on 8 June.
According to security minister Ben Wallace, the Conservatives are open to considering fines for social networks or a change in the law to encourage more action from them.
Speaking to Pienaar’s Politics, Wallace referred to the recent leak of Facebook’s moderation guidelines, branding the social network’s rules surrounding the bullying of children as ‘unacceptable’, and accused social media companies of using algorithms to make money by encouraging people to keep watching ‘unhealthy’ content.
The role of social networks, and the need for them to do more to suppress extremist content, was a key topic of discussion during the G7 meeting in Sicily last week.
At the summit, prime minister Theresa May said: “Make no mistake, the fight is moving from the battlefield to the internet. In the UK, we are already working with social media companies to halt the spread of extremist material, and hateful propaganda, that is warping young minds. But I am clear that corporations can do more. Indeed, they have a social responsibility to now step up their efforts to remove harmful content from their networks.
“Today, I called on leaders to do more. We agreed a range of steps the G7 could take to strengthen its work with tech companies on this vital agenda. And ministers will meet soon to take this forward.
“We want companies to develop tools to identify and remove harmful materials automatically. And, in particular, I want to see them report this vile content to the authorities and block the users who spread it. And the G7 will put its weight behind the creation of an international, industry-led, forum where new technologies and tools can be developed and shared to help us deny terrorists their pernicious voice online.”
Facebook has been working to police the content on its site, recently announcing that it would hire 3,000 more moderators to monitor content on the social network. The company is also developing AI to automatically detect harmful content, as May mentioned in her speech. However, it remains to be seen how effective either measure will be against the sheer volume and variety of content on the site.