Viewpoint: Sex, lies and digital video
- Wednesday, February 7th, 2018
In the film Doubt, a priest played by the late, great Philip Seymour Hoffman gives a sermon about gossip. He tells the story of an old woman who gossips with a neighbour about a man she hardly knows, then feels a terrible guilt. When she goes to her priest, Father O’Rourke, to confess and ask for forgiveness, he instructs her to take a pillowcase up to the roof and rip it open with a knife. The woman does as Father O’Rourke instructs, then returns to the church. He asks her what happened when she cut the pillowcase open, and the woman responds “Feathers everywhere, Father.”
“Now,” says Father O’Rourke, “I want you to go back and gather up every last feather that flew out on the wind.” The woman says that it can’t be done, the wind took them all over and she doesn’t know where they went. “And that,” says Father O’Rourke, “is gossip!”
This week has seen Twitter, gaming chat platform Discord and Pornhub all come out against the explosion of ‘deepfakes’ – videos that use artificial intelligence to convincingly map someone’s features onto another person’s body, most often for pornographic use. The three companies, along with image hosting site Gfycat, have all promised to remove any deepfakes posted on their platforms, but despite relatively swift action from some of the larger tech firms, this is a problem that is only just beginning.
While the practice of doctoring explicit images to resemble other people existed long before the internet, it was previously a long and exhausting process, and rarely a convincing one, especially when it came to video content. Without a Hollywood editing suite and hundreds of hours of work, the process was more or less impossible. However, in the past few months, AI tools have emerged that enable people to produce such videos far more quickly and easily.
Tech news site Motherboard first reported on the phenomenon in December, when a user on the internet message board Reddit began showing off the algorithm-enabled videos he had produced on his home computer. Initially, the process required a fairly advanced working knowledge of neural networks and machine learning, but at the time, experts predicted that a more user-friendly version of the technology would be available within a year or two.
That prediction turned out to be overly generous. Within two months, technically savvy users had produced an app that guides people through the process and distributed it on Reddit. This resulted in an explosion in the number of videos created, with users training neural networks on celebrity photos and footage from films, TV shows and social media, then mapping their features onto pornographic movies.
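For the technically curious, the approach behind these early face-swap tools is generally understood to rest on a pair of autoencoders that share one encoder: the shared encoder learns a pose- and lighting-aware representation of a face, while each person gets their own decoder, and the swap is performed by decoding one person’s frames with the other person’s decoder. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch; it is not the code that circulated on Reddit, and the architecture, sizes and training loop are stand-in assumptions for clarity only.

```python
# Hypothetical, minimal sketch of a shared-encoder / two-decoder face-swap
# autoencoder. Illustrative only; real tools use far larger networks and
# thousands of aligned face crops scraped from video and social media.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a = Decoder()   # learns to reconstruct person A's face
decoder_b = Decoder()   # learns to reconstruct person B's face
params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

# Random tensors stand in for batches of aligned 64x64 face crops.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(10):  # a real run would need many thousands of steps
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The "swap": encode a frame of person B, then decode it with person A's
# decoder, producing person A's face in person B's pose and lighting.
with torch.no_grad():
    swapped = decoder_a(encoder(faces_b))
```

The key point, and the reason the technique spread so fast once it was packaged into an app, is that nothing here requires specialist hardware or expertise: a consumer graphics card, publicly available footage and a few days of training are enough.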
The focus so far has been on female celebrities, as one might (unfortunately) expect. Figures including Gal Gadot, Taylor Swift, Emma Watson and Natalie Portman have all been ‘deepfaked’, with new names added to the list every day. One can only imagine how traumatic this is for the people involved, seeing their face appear in explicit material they never made, and with those behind the technology pushing for faster, more effective tools, the worst is yet to come.
Even though the technology is only a few months old, there have already been multiple reports of people using social media to build up large enough collections of photos and footage to convincingly ‘deepfake’ former partners, a horrific new angle on the already-endemic problem of ‘revenge porn’.
And it’s not just pornographic content either – the technology can be used to doctor videos for any purpose. With the tech world and general public just beginning to understand the extent of the ‘fake news’ problem, we now live in a world where we can no longer trust the videos we see, let alone the articles we read.
Speaking to The Outline, Hany Farid, a professor of computer science at Dartmouth College, posited the “nightmare scenario of somebody creating a video of Trump saying ‘I’ve launched nuclear weapons against North Korea,’ and that video going viral, and before anyone gets around to realising it’s fake, we have full-blown nuclear holocaust. I would say I’m not prone to hysteria or exaggeration, but I think we can agree that’s not entirely out of the question right now.”
Farid, a member of the Defense Advanced Research Projects Agency’s Media Forensics program, admits that the emergence of deepfakes blindsided the group, which couldn’t have conceived of this level of advanced digital manipulation being available en masse 18 months ago. That is, unfortunately, the speed at which technology now moves, especially when a large number of technically minded users are all tackling the same challenge.
“The reality is that the number of people working on the forensics side, like me, is relatively small compared to the number of people working on the other side,” said Farid. “We are greatly outnumbered and out-resourced. Google is not developing forensic techniques. Facebook is not developing forensic techniques. It’s a bunch of academics. We’re outgunned.”
To return to Father O’Rourke’s sermon, publishers, tech firms and brands all need to recognise that we now live in a world where both fake news and the tools used to produce it can be distributed in the blink of an eye, and once they get out there, it’s incredibly hard to get them back in the pillowcase.
As these tools proliferate and find their way into the hands of more and more bad actors, every part of the digital world, from everyday users to global brands, needs to take a firm stand and recognise that it will take a concerted effort to fight back against this tide. If we don’t act quickly and decisively, the concerns we saw last year about brand safety will be nothing compared to those coming down the line.