Viewpoint: Hey there robot, are you sure about that choice?
- Wednesday, October 18th, 2017
Over the last 12 to 18 months, businesses and organisations have really started to embrace the power of AI and machine learning, whether that be through smart speakers, chatbots or self-driving vehicles. With the rise of the machines, there is growing concern in some industries about robots eventually taking people’s jobs – something I briefly touched on a few months ago in regard to the journalism industry – but can the machines, at least for the time being, really do this without a truly human mindset?
For many years, AI has been able to beat the best players in the world at chess. More recently, an AI from Google’s DeepMind was able to beat the world’s best at Go, the Chinese abstract strategy board game. So, yes, the technology has already proven to be more intelligent than us in some areas. And, of course, it has already started replacing humans in some job roles thanks to that intelligence, efficiency and lower cost. Some even predict that 50 per cent of all current jobs could be replaced by AI in the next 10 years.
Despite the supposed bleakness of the situation, it may actually be a little while longer before the machines completely take over a large chunk of the employment market (don’t hold me to that). Why? Because the technology is still incapable of feeling. Incapable of making rational, moral or ethical decisions. Yes, AI can be taught to make certain decisions, but for some of these decisions even we as humans don’t know the correct moral or ethical answer – though we’d probably still have a better chance of choosing the ‘right’ one.
Before delving into the idea of morality and ethics, let’s set the scene with a little bit of recent news. It’ll all make sense very soon, I promise.
Last week, Californian regulators revised the state’s rules on self-driving vehicles, deciding that automakers and tech companies could be allowed to have their cars on the roads of the Golden State without a human driver as early as next year. Meanwhile, Google parent Alphabet’s Waymo has been working with police forces in Arizona, Texas, Washington, and the aforementioned California, to train them how to respond to self-driving car crashes (sorry, road traffic accidents). Waymo has also released this handy safety report.
What would you do?
Now you’ve taken all that in, I want you to carry two key phrases with you for the rest of our journey: ‘without a human driver’ and ‘road traffic accidents’. Got it? Good.
Picture this situation: a self-driving car, carrying a family, is travelling down a road when – with no time to brake – it is forced to swerve to avoid an animal that has stumbled out in front of it. Sounds simple enough, until you add in that, by swerving to avoid the animal, the car puts the family inside at risk, as well as a group of people on the pavement. ‘What about swerving in the other direction?’ you ask. Well, that would take the car down a rocky hill, once again putting the family in harm’s way. This, of course, is an extreme example, but it’s fairly clear which outcome would be the least bad, even though all of them are bad.
This scenario is a play on a famous philosophical one, posed to many around the world, involving a train. In it, a train is hurtling toward a group of people tied down on the track, there’s not enough time to brake, and your only other option is to divert the train onto another stretch of track where only one person is tied down. Most people, faced with this moral dilemma, would choose to divert the train, because one death is better than four.
The scenario becomes a bit more difficult to answer when the choice is to throw one man in front of the train to stop it killing the group. Or more difficult still if the person on the other bit of track is now a family member or a friend, while you still don’t know anyone in the group.
The above are examples of moral and ethical dilemmas – extreme ones, admittedly (it’s all a bit like Saw, though I’m not entirely sure how excited I am about Jigsaw) – that even we as humans don’t quite know how to deal with, so how can we ever expect to teach a computer how to respond in situations like these?
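To see why this is so awkward to automate, here’s a deliberately naive sketch in Python of the ‘minimise harm’ approach a car might be given. It’s entirely hypothetical – no manufacturer has published logic like this, and the manoeuvres and numbers are invented for illustration. The code runs happily; the hard part is hidden in those made-up numbers, because somebody still has to decide how harm to the family, the pedestrians and the animal is weighted against each other.

```python
# Hypothetical sketch: score each manoeuvre by expected harm, pick the lowest.
# The options and values below are invented, not anyone's real control logic.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_harm: float  # arbitrary units: who decides this scale?

def choose(options: list[Option]) -> Option:
    # "Minimise harm" sounds simple until you ask how harm to passengers,
    # pedestrians and the animal should be traded off against each other.
    return min(options, key=lambda o: o.expected_harm)

if __name__ == "__main__":
    options = [
        Option("brake in lane", 0.9),   # hits the animal
        Option("swerve left", 0.7),     # risks the people on the pavement
        Option("swerve right", 0.8),    # rocky hill, risks the family
    ]
    print(choose(options).name)
```

The machine will always pick something; the question the code can’t answer is whether the numbers it was given were the right ones in the first place.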
Not sure I’d do that myself
On a less horror genre-esque note, moral and ethical dilemmas could also become a factor as AI is used more and more to make decisions within the workplace. In most cases, AI within a business will be taught to make decisions based on what it believes is ‘best for business’, but sometimes such a decision would only benefit the company financially in the short run while having long-lasting moral and ethical implications.
In addition to this moral and ethical decision making, there is also the problem of AI developing internal biases through machine learning – something Facebook’s chief security officer, Alex Stamos, spoke about recently. If a machine learns the wrong response to a situation, we could have a problem on our hands, and, who knows, I, Robot may not actually be the craziest possible outcome of that.
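As a toy illustration of how that happens (invented data, nothing to do with Facebook’s systems), here’s the crudest possible ‘learner’ in Python: it just counts historical outcomes per group, so it faithfully reproduces whatever imbalance the data it was fed happened to carry.

```python
# Toy example of learned bias: skewed training data in, skewed predictions out.
from collections import Counter

# Hypothetical historical records: outcomes are heavily one-sided per group.
training = [
    ("group_a", "approve"), ("group_a", "approve"), ("group_a", "approve"),
    ("group_b", "reject"),  ("group_b", "reject"),  ("group_b", "approve"),
]

# "Training" here is just counting outcomes per group -- the crudest possible
# learner, but enough to show the point.
model: dict[str, Counter] = {}
for group, outcome in training:
    model.setdefault(group, Counter())[outcome] += 1

def predict(group: str) -> str:
    # The model reproduces whatever imbalance the data carried.
    return model[group].most_common(1)[0][0]

print(predict("group_a"))  # approve
print(predict("group_b"))  # reject -- learned from the skew, not from merit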
At the same time, I do appreciate that human morals and ethics are far from perfect (like, really far). And I do acknowledge that, in some scenarios, a machine could probably make a better decision than I or anybody else could ever make.
What I know for sure is that it will be interesting to see how tech companies deal with the idea of morality, ethics, and rational behaviour within robots, and how successful AI truly becomes in taking our jobs over the next few years. Sounds very Brexit-y, doesn’t it?