In my job, I attend a lot of events and see a lot of presentations, from both brands and companies on the supply side, and one subject seems to find its way into more and more of them – AI, or Artificial Intelligence to give it its full name. It’s almost becoming a badge of honour to make some reference to your platform’s AI capabilities in your presentation, or to admit that you’re working with IBM’s Watson platform, as an increasing number of companies are.
Companies involved in everything from programmatic advertising to digital asset management are turning to AI for some serious number-crunching, so they can do what they do better and more intelligently.
A couple of examples. Ads for West End shows are served programmatically into the theatre review section of a newspaper website, based on the context of the content surrounding them. The ads get the DCO (Dynamic Creative Optimisation) treatment so they blend in seamlessly with the surrounding content, though, in accordance with ASA rules, they of course carry an ‘Advertisement’ label to distinguish them from the editorial reviews. One of the beauties of this is that, in this GDPR age, it requires no targeting of individuals on a personal level – the fact they are where they are on the newspaper’s website is a clear statement of their interest in the subject.
A data management and insights platform uses AI to get an early steer on what’s trending out there in the big, wide world, or one specific part of it, in order to inform a brand’s decision as to what ingredients to use in a new brand of green tea – in one case, a good six months before the same trend showed up in Google search analytics, giving the brand a first-mover advantage.
And a Digital Asset Management system uses AI to tag visual content to reveal more about it – whether it’s a photo of a man or a woman, what they are wearing, what brands, if any, feature, what they are doing, whether they look happy or sad. Based on this analysis, which can be applied to hundreds of thousands of images in next to no time – IBM claims Watson can read 800m pages per second – the brand can get a handle on which images and content on its social and other owned and earned media are generating the most interest and positive sentiment, and use that to inform what it does in the future.
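As a rough illustration of how such tag data might be turned into insight, here is a minimal Python sketch. The tags and engagement figures are invented for the example, and the tagging itself is assumed to have been done upstream by an AI vision service – this is not Watson’s actual API, just the aggregation step that would follow it:

```python
from collections import Counter

# Hypothetical output from an AI image-tagging service, paired with
# an engagement score pulled from social analytics. All names and
# numbers here are illustrative, not real data.
tagged_images = [
    {"tags": {"woman", "running", "trainers", "happy"}, "engagement": 340},
    {"tags": {"man", "coffee", "laptop", "neutral"},    "engagement": 120},
    {"tags": {"woman", "beach", "sunglasses", "happy"}, "engagement": 410},
    {"tags": {"man", "running", "trainers", "happy"},   "engagement": 280},
]

def engagement_by_tag(images):
    """Total engagement attributed to each tag across the image set."""
    totals = Counter()
    for img in images:
        for tag in img["tags"]:
            totals[tag] += img["engagement"]
    return totals

# Rank tags by the engagement of the images they appear in, so the
# brand can see which visual themes resonate most.
totals = engagement_by_tag(tagged_images)
for tag, score in totals.most_common(3):
    print(tag, score)
```

With this toy data, images tagged “happy” account for the most engagement, which is the kind of steer – scaled up to hundreds of thousands of images – that would feed back into future content decisions.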
Anyone for tennis?
Only yesterday, I sat through an entertaining presentation from Andrew Canter, global CEO of the Branded Content Marketing Association, which ended with a 2-minute video of some amazing winners played at the Wimbledon tennis tournament in recent years. As a way of passing a couple of minutes at the end of a day of presentations and discussions, it was more than welcome, but I did wonder what the point was, unless the speaker had some special interest in tennis. All was revealed at the end of the video, when the presenter disclosed that no human had been involved in the choice of the clips used in the compilation; someone had merely instructed IBM’s Watson AI platform – IBM has been a Wimbledon tech partner for some 25 years – to put together a highlights sequence, and this was the result.
Canter then made the point that, in fact, AI is at its best when it’s a combination of human and machine, and indeed, most people who include AI in their presentations seem to throw in a line to the effect that it’s a force for good, not for bad, and that AI is not going to make you redundant – the punchline usually being a delayed “yet” at the end of the sentence.
Personally, the most honest assessment of this I have seen is from Kevin Kelly, in his book The Inevitable, which I reproduce here in full, partly because it’s as funny as it is scary. In the book, he says:
“In the coming years, our relationship with robots will become ever more complex. But already a recurring pattern is emerging. No matter what your current job or your salary, you will progress through a predictable cycle of denial again and again. Here are the Seven Stages of Robot Replacement:
1. A robot/computer cannot possibly do the tasks I do.
2. OK, it can do a lot of those tasks, but it can’t do everything I do.
3. OK, it can do everything I do, except it needs me when it breaks down, which is often.
4. OK, it operates flawlessly on routine stuff, but I need to train it for new tasks.
5. OK, OK, it can have my old boring job, because it’s obvious that was not a job that humans were meant to do.
6. Wow, now that robots are doing my old job, my new job is much more interesting and pays more!
7. I am so glad a robot/computer cannot possibly do what I do now.”
You get the point. Of course, AI is in the news for the wrong reasons this morning with the release of a report compiled by seven organisations, including Cambridge University's Centre for the Study of Existential Risk and the Future of Humanity Institute, warning that AI is ripe for exploitation by rogue states, criminals and terrorists. The report identifies three specific threats – automated hacking, drones being converted into missiles, and highly convincing fake videos being used to manipulate public opinion – the last of which Tim Maytom covered in his Viewpoint piece a couple of weeks ago.
Clearly the idea of someone with bad intent hacking into the control system of a driverless car, or fleet of driverless cars, is pretty scary, and no one should be naïve enough to think there won’t be people out there trying to do just that and other potentially catastrophic things.
I know the idea of the robots taking over the world is by no means a new one and that the threat has been voiced by many people, including some unlikely ones such as Elon Musk. But this report does move things on in seeking to do something about the threat. It calls for a number of things, but the overarching message is that the hype no longer outstrips the reality of AI, and governments and policy-makers around the world need to wake up to this fact and start preparing for it. You only have to look at the way the internet – in many ways a force for good – is routinely abused and misused to see why the people behind the report are so concerned.
You can download the report here.