Think of a robot and you will probably picture something that looks a bit like us. From faithful companions like C-3PO and Wall-E to the rather less friendly Cybermen, most robotic creations of popular culture have some recognisable human features.
Robots have been assembling cars for decades and are now starting to enter our homes. In fact, these useful machines usually have little resemblance to humans or other living creatures. This lack of familiar features may not be a problem when the robot is merely vacuuming your floor, but would you trust a robot to drive your car, cook your dinner or take care of your children?
Building robots to look like us does seem an obvious way to make them appear more trustworthy. “We tend to project our own attributes and behaviours onto humanlike robots,” says Dr Jeremy Goslin of Plymouth University’s School of Psychology.
Social robotics engineers and psychologists at Plymouth University have studied our trust in robots using games in which experimental participants negotiate the prices of household objects with different types of robot. These experiments show that people do indeed trust the judgments of humanlike robots over those of robots less recognisably modelled on human forms, even when the two types behave in exactly the same way.
This seems a problem given that our future lives are not likely to be filled with smiling robot faces with big friendly eyes. “Many upcoming robot roles are not well suited to a humanoid form,” says Dr Goslin. “We wanted to find out how to increase our trust in useful robots, even if they look a bit odd”.
The team found that when someone’s first interactions were with humanlike robots, like the iCub, they were subsequently also more trusting in interactions with less humanlike robots, like the Scitos G5 (see pictures), an effect they call “anthropomorphic priming.”
Conversely, if people first interacted with the less humanlike robots, they subsequently showed reduced trust even in humanoid robots. “First impressions are important, even with robots,” says research student Debora Zanatto. “If we initially associate a robot with humanlike traits, then we are willing to extend this trust to other robots, even if they do not look or act as we do”.
Anthropomorphic priming has been proposed as a way to increase trust in the assistive robotic devices being developed to help care for the elderly. These have so far met with a poor response from users because of their lack of humanlike features.
In science fiction, of course, humanoid robots are not always benign. We should be reassured that terrifying cyborgs like the Terminator, indistinguishable from humans but rather less destructible, will not be popping round to do the ironing any time soon.
Research funded by the US Air Force and presented at the 2016 ACM/IEEE International Conference on Human-Robot Interaction.
Zanatto, D., Patacchiola, M., Goslin, J. and Cangelosi, A. (2016). Priming Anthropomorphism: Can the credibility of humanlike robots be transferred to non-humanlike robots? In The Eleventh ACM/IEEE International Conference on Human-Robot Interaction (pp. 543–544). IEEE Press.