IBM: “AI is not a replication of the human mind”
- Monday, February 27th, 2017
Every year there’s a new technology which seems to dominate MWC, with seemingly constant announcements and big queues at any stand showing it off. At previous shows that role has been filled by VR, the connected home, wearables, and this year – surprising no one – it seems to be AI.
AI is currently undergoing a renaissance, with Amazon’s Alexa as its mainstream-friendly face, or rather voice, but the retail giant was notably absent from this morning’s ‘Artificial Intelligence: Chatbots and Virtual Assistants’ conference session. In its place were Google, talking about its rival offering, Assistant; SK Telecom, which admitted it has been lucky that Amazon hasn’t entered the Korean market, giving it space to develop Nugu, a localised answer to Echo; and the company which seems to have put its full weight behind the technology, IBM.
While the company doesn’t have any stake in consumer-facing offerings like Echo, as IBM Fellow Rob High acknowledged on stage, it wants to be the engine that powers those experiences, through its cognitive system Watson. And it’s worth noting the terminology there – ‘cognitive system’, not ‘AI’.
“Cognitive computing – what is often called AI – is not a replication of the human mind,” said High, who is also VP and CTO of IBM Watson. “It’s a set of intelligence capabilities that provide strength and leverage to our human mind – that fill in gaps where we have limitations.”
The difference, he said, is that “cognitive computing is about amplifying human cognition”.
“We as human beings possess a huge amount of cognitive capability, to learn and to leverage things we have learnt in new situations – but there are a lot of limitations to our capabilities. There’s only so much we can read in a given day, only so much we can recall, only so much we can assimilate.”
A key part of this distinction is how IBM sees AI – sorry, cognitive computing – being used.
“Today most of our experiences with voice assistants are centred around tasks that need to be performed,” said High. “How do we bring that to a deeper level? How do we enable a conversation to occur?
“How do we help people produce the next new idea? How do we help them work on it, mould and shape that idea so that after the conversation we end up with something better than we started with?”
High does see value in the kind of tasks that AI assistants currently focus on – “systems that allow us to offload some of the mundane things,” as he put it – as a way to help achieve the loftier goal of “expanding our minds”.
“If it takes more than seven seconds to get an answer to a question, we lose track of where we were. Even Google search tends to be too slow,” said High. “So by having instantaneous access to information, that enables us to deepen our concentration in an unprecedented way, and actually increase our intelligence.”
Another vital part of the idea-enhancing process, according to High, is understanding how the context in which an idea is developed – everything from recent experiences to temperature to background noise – impacts the idea itself. We’re not there yet, not by a long way, but it’s part of the technology’s development.
“Everything that we’re seeing today is just the beginning,” said High. “We’re going to see many more advancements.”
These advancements include broadening the channels where cognitive computing can be accessed, beyond phones and speakers to encompass TVs, cars and even entire homes. High also spoke about expanding the “nodes of interaction” to incorporate not just voice but gestures, and “better acuity and understanding of what we mean with tone of voice and cadence”. Another important development, as it was with mobile and the internet before it, will be commoditisation and standardisation.
The key thing to look for among all these developments, though, according to High? “Improvements that help us do the things that we as humans do well, even better.”