Viewpoint: What YouTube's latest scandal can teach us about automation

So here we are again. Major brands like Adidas, Cadbury and Deutsche Bank have pulled advertising from YouTube, following a controversy last week about the way the video platform is being used for sexual predation on children. It was revealed that ads for these brands were appearing against content that was either posted by paedophiles or featured explicit comments aimed at children who had posted videos of themselves.

This is an undeniably horrific situation, but it’s also a familiar one. Flash back to the start of the year, and the last glut of headlines about advertisers boycotting YouTube – in that case, over ads showing alongside videos from terrorists. Or back to the last time YouTube was accused of failing its youngest users, earlier this month, when a blog post exposed the creepy and often genuinely inappropriate videos being targeted at kids.

This, it seems, is the background radiation of the modern internet.

Look at the most recent controversy involving Facebook, and the role its platform played in Russian attempts to interfere with last year’s US election, through ads and fake news stories which spread because they fitted Facebook’s remit for shareable content. Jumping back to YouTube, look at the inevitable follow-up to the paedophilia story, as its autofill search was found to be suggesting ‘s*x with your kids’ (the asterisk serving to circumvent YouTube’s content filters).

The algorithm problem
When you look at all these stories together, the problem seems simple. These platforms are built around user-generated content, so incidents like this are an inevitable side effect of the enormous scale at which Facebook and Google operate. After all, they can't be expected to individually vet every ad and piece of content that goes through their respective platforms. Some 300 hours of video are uploaded to YouTube every minute – by my maths, just to watch all the videos going up, you'd need nearly 20,000 people working around the clock.
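If you want to sanity-check that back-of-the-envelope figure yourself, here's a rough sketch, assuming only the widely quoted 300-hours-per-minute upload rate:

```python
# Rough sanity check: how many people would need to watch YouTube
# non-stop just to keep up with new uploads?
upload_hours_per_minute = 300            # widely quoted upload rate

# Every minute, 300 hours (18,000 minutes) of new video arrives,
# so watching it all in real time needs 18,000 simultaneous viewers.
viewers_needed = upload_hours_per_minute * 60

print(viewers_needed)   # 18000 – i.e. "nearly 20,000" people around the clock
```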

Our solution to that problem can be summed up in a single word, one that has been given almost magical power in this industry: Algorithms.

But in many of the examples I’ve mentioned above, algorithms are actually contributing to the problem. Many of the creepiest YouTube ‘kids’ videos exposed by James Bridle’s blog post are the result of what he calls ‘keyword salad’, mashing together the top search terms to take advantage of the search and discovery algorithms used by YouTube. And it cuts both ways – it’s been suggested that the aforementioned incestuous autocomplete results may also have been the result of people gaming the search algorithms in a specific effort to embarrass YouTube.

Man vs machine
The temptation here is to separate the problem and solution into two disparate halves. On one side, you have algorithms and machines, which are efficient and precise to a fault. On the other, you have humans – unpredictable and creative, but imbued with a common sense that means they can recognise when something is off. But of course, the divide isn’t that clean.

As human beings, we have a natural affinity for systems. Create a set of rules – which, ultimately, is what any algorithm is – and people will internalise them and look for ways to bend them, so they get the maximum output from the minimum input. That's how gambling works; it's how you get 'whales' spending terrifying amounts on in-app purchases; and ultimately it's why capitalism as a whole is such a successful system.

And according to none other than Tim Berners-Lee, inventor of the World Wide Web, “the system is failing”. Google and YouTube have created algorithms that reward misuse, whether that's churning out potentially disturbing children's videos or spreading fake news without any political agenda, simply because that content generates clicks.

As Berners-Lee told The Guardian earlier this month: “The way ad revenue works with clickbait is not fulfilling the goal of helping humanity promote truth and democracy. So I am concerned.”

Sins of the developer
So, perhaps humans are a little more like machines than we might like to admit. What about the other way around?

This is where it starts to get really troubling, because algorithms very often carry the biases of the people who created them, or of the data they are processing. And those biases can be very ugly indeed.

Last May, ProPublica found that Compas, a computer program used by a US court, was twice as likely to mistakenly flag black defendants as potential reoffenders as it was white defendants. Similarly, HRDAG has shown how PredPol, used to identify crime hotspots, could get stuck in a feedback loop of over-policing neighbourhoods with a majority non-white population.
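To see how that kind of feedback loop takes hold, here's a deliberately simplified sketch – not PredPol's actual model, just an illustration of the dynamic HRDAG describes, where patrols follow past records and only observed crime gets recorded:

```python
# Toy illustration (not PredPol's real model): two neighbourhoods with
# identical underlying crime rates, but one starts with slightly more
# *recorded* incidents. Patrols are allocated according to past records,
# and only crime a patrol is present to observe gets recorded – so the
# initial skew is never corrected, however long the system runs.
true_rate = [0.5, 0.5]           # same underlying crime rate in both areas
recorded = [60.0, 40.0]          # historical records start slightly skewed

for year in range(5):
    total = sum(recorded)
    patrols = [100 * r / total for r in recorded]            # follow the data
    observed = [p * c for p, c in zip(patrols, true_rate)]   # crime seen by patrols
    recorded = [r + o for r, o in zip(recorded, observed)]   # feeds next year's model
    print(f"year {year}: patrol split {patrols[0]:.0f}/{patrols[1]:.0f}")
```

Even with identical underlying crime rates, the patrol split never moves off 60/40 – the model simply keeps confirming its own starting assumptions.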

Moving back into the realm of social media, there was Microsoft's infamous chatbot, Tay. The bot used machine learning to hold natural-language conversations with Twitter users – and was pulled the day after it launched due to the offensive content of many of its tweets. Or the Seattle Times report which showed that LinkedIn's search suggestions were 'correcting' many female names, suggesting male names in their place.

Facing the consequences
These are problems that, at a high level, the tech industry is aware of and is working to address, even when they don't involve the loss of millions in ad dollars. Google set up its PAIR (People + AI Research) initiative in July, to study the interaction between humans and machines and make sure that the technology “benefits and empowers everyone”.

This is exactly the kind of initiative that's needed, but it seems to be tackling the tech problem, rather than the human one. It's probably impossible to deal with the real root of the issue: the inherent thought processes in the people behind the technology. As Silicon Valley actor Kumail Nanjiani tweeted earlier this month, of his conversations with developers: “We are realizing that ZERO consideration seems to be given to the ethical implications of tech.”

What does this mean going forward? Well, taking the absolute narrowest view, it means that this probably isn’t the last time advertisers will pull spend from YouTube, as we discover new problems stemming from the combination of sheer scale of content, unpredictable human behaviour, and algorithms that can enable and even encourage abuse of the system. We can probably expect a lot more controversies like the ones mentioned above.

To some extent, this is just the cost of doing business. If you want the kind of scale that these platforms can offer, and the kind of targeting opportunities that come with social media, then you have to accept the problems that come along with them – risks not just to your own business interests, but potentially to society more broadly.

This is a bigger issue than just brand safety or transparency, though both are tangled up in it. As a result, it's much harder to fix – but that doesn't absolve us of responsibility. We can keep working on the machine side of the equation, but ultimately the only solution is for humans, whether we're developers, users or marketers, to be more mindful of our actions, and of the consequences that can snowball when working at the kind of scale that Facebook and YouTube offer.