The headline-grabbing claims that human workers will be replaced by robots are misleading. Currently, only about 5% of jobs could realistically be replaced by robots. There are, though, a million jobs that could be done better, faster and cheaper with humans and robots working together. So where is the interface between the human and the robot, and what form does that interface need to take?
The notion of the ‘cobot’ – a robot designed to collaborate with human beings – is a compelling one. We know that humans and robots have completely contrasting strengths and, in that sense, they should be complementary. A £1 calculator can answer a complex sum faster and more accurately than a professor of mathematics. A three-year-old child will recognise a face or voice more accurately than a £1,000 smartphone. Who knows what the future and the advent of affordable quantum computing will bring, but until that day, we have variations on a simple theme: robots do numbers and humans do consciousness.
This distinction between the strengths of the robot and the human isn’t new – we’ve known about it for as long as we’ve had computing machines. What is new is the collaboration between humans and robots that is one of the hallmarks of this fourth industrial revolution.
The question now becomes a simple one: where is the interface between the human and the cobot, and how do we choose to skin it? Tech geeks of a certain age will be familiar with the notion of ‘skinning’: leaving the underlying technology unchanged while overlaying the graphical user interface (GUI) with a design or theme, and sometimes redesigning it entirely. ‘Skinning’ feels like the right concept here because the interface between humans and cobots is what it is, but we have an opportunity to design how it looks and feels.
This is where the case for augmented reality comes to life.
Although there is an underlying need for digital skills such as coding and cybersecurity, it is unrealistic to think that most workers in industries such as manufacturing, automotive repair and logistics will acquire those skills – even if that were desirable (which it might not be). Instead, we need an interface that enables the human to draw on their human abilities of judgement, instinct and emotional intelligence to direct cobots, and that provides the cobot with clear processing instructions.
The emergence of augmented reality, in which data and models are presented in 3D – overlaid on the real world – brings data off the 2D plane and into dimensions that the human recognises. In this way, our basic human skills of interpreting visual data and spotting patterns and relative changes can be brought into play. Being able to interact with that reality opens up the opportunity for our physical directions to be translated through the interface from an analogue movement into a digital instruction, and then into a processing instruction for the cobot.
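To make that translation concrete, here is a minimal sketch of what such a pipeline might look like. Everything in it is hypothetical – the gesture types, the command names and the coordinate scheme are illustrative assumptions, not any real cobot or AR API:

```python
# Hypothetical sketch: an analogue human gesture, captured through an AR
# interface, is digitised into a structured instruction a cobot can process.
# All names, gesture kinds and commands are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Gesture:
    """An analogue human direction captured by the AR headset."""
    kind: str   # e.g. "point", "pinch" (assumed gesture vocabulary)
    x: float    # where the gesture landed in the overlaid 3D scene
    y: float
    z: float


def to_instruction(gesture: Gesture) -> dict:
    """Translate the gesture into a discrete processing instruction."""
    if gesture.kind == "point":
        # Pointing at a location becomes a move command.
        return {"command": "MOVE_TO", "target": (gesture.x, gesture.y, gesture.z)}
    if gesture.kind == "pinch":
        # A pinch at a location becomes a grip command.
        return {"command": "GRIP", "target": (gesture.x, gesture.y, gesture.z)}
    # Anything unrecognised: the cobot holds position rather than guessing.
    return {"command": "HOLD", "target": None}


# A pointing gesture in the AR scene becomes a move instruction for the cobot.
instruction = to_instruction(Gesture("point", 0.4, 1.2, 0.8))
print(instruction)
```

The design point the sketch illustrates is the one the paragraph above makes: the human supplies judgement in an analogue, spatial form, and the interface does the work of turning it into something unambiguous for the machine.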
The ability to overlay information and instructions on the physical world also enables ‘instant upskilling’. I can’t defuse a bomb in reality, but with the correct visible indicators and instructions overlaid on a bomb I could defuse one in augmented reality. The learning time and cost would be close to zero. It doesn’t take a great leap of imagination to see how this could apply to manufacturing, healthcare or engineering anywhere in the world. We don’t all know how to undertake specialist technical tasks, but we all know how to follow visual instructions in an emotional context, and that very human trait is the one that may solve the productivity puzzle, accelerate economic and social growth in the developing world, and build a new future for us all.