As prices drop and their functionality expands, you can expect to see humanoid robots in more places, including schools, airports, and hospitals. That’s made researchers curious about how androids and their kin will influence human behavior. In a study published today, scientists found that meaner, colder robots can help people concentrate — and that could help us better understand human-robot relationships.
The experiment, published in Science Robotics, was based on something called the Stroop task, a widely used neuropsychological test once described as the “gold standard” of attentional measures. It challenges participants to name the colors of words while ignoring their meanings, as researchers measure reaction time. For example, if you see the word “blue” in yellow letters, there’s a lag as your brain struggles to say the actual color, yellow, instead of reading the word “blue.” One explanation for this phenomenon is that your brain processes words faster than colors.
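To make the setup concrete, here’s a minimal sketch of how a Stroop trial could be generated and scored in Python. This is an illustration of the general paradigm, not the study’s actual stimulus code; the color list and trial structure are assumptions.

```python
import random

# Hypothetical stimulus set; the actual study's colors may differ.
COLORS = ["red", "blue", "green", "yellow"]

def make_trial(congruent: bool) -> dict:
    """Build one Stroop trial: a color word displayed in some ink color.

    In a congruent trial the word and ink match ("blue" in blue ink);
    in an incongruent trial they conflict ("blue" in yellow ink),
    which is where the reaction-time lag appears.
    """
    word = random.choice(COLORS)
    if congruent:
        ink = word
    else:
        ink = random.choice([c for c in COLORS if c != word])
    return {"word": word, "ink": ink, "congruent": congruent}

def correct_answer(trial: dict) -> str:
    """The task is to name the ink color, ignoring the word itself."""
    return trial["ink"]

# Example: an incongruent trial always has a word/ink mismatch.
trial = make_trial(congruent=False)
print(trial["word"], "shown in", trial["ink"], "->", correct_answer(trial))
```

In a real experiment, the interesting measurement is how much longer participants take to respond on incongruent trials than on congruent ones — the gap the researchers tracked with and without a robot in the room.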
The researchers put a modern twist on the task, though — this time, there was a robot in the room. The goal was to see if the presence of a robot would affect cognition, and the researchers found it did, but only when the robot was a jerk.
How do you make a robot mean? In this case, a meter-tall toy robot called a Meccanoid G15KS was scripted to respond to seven questions with either friendly or hostile answers. The good robots told jokes, spoke about friendship, and described test subjects as nice. The bad robots replied to questions with passive-aggressive comebacks, such as “I enjoy doing analysis and evaluating programs but you would not understand” and callous statements like “I do not value friendship.” Ouch.
After the sessions, the participants rated the robots based on their likeability and humanness using three different scales. “The more participants attributed Discomfort traits to the robot, the greater was the improvement of Stroop performance,” the researchers wrote. “Not surprisingly, the bad robot was rated as less warm, friendly, and pleasant than the good robot.”
The study authors argue that, in some situations, robots are crossing the line from machines to social agents. That will change how humans interact with and behave around them.
“Similar to human presence, we could hypothesize that the presence of a robot could not be neutral in other situations like school or in the office when you are working,” Nicolas Spatola, one of the study authors, said in an email. “So before your boss decides to introduce a robot in your office, it could be a good idea to evaluate how you perceive it and how it can positively or negatively impact your work, how comfortable you may feel with it or if you feel a threat.”
The sample size wasn’t very large — just 58 students from Université Clermont Auvergne in France — but the researchers found a significant uptick in the speed of correct answers among those in the presence of a mean robot when compared to those who were with a nice robot or alone. Previous research has found that people are similarly better at the Stroop task when another human is in the room, and the researchers suggest that the same mechanism is at play with the mean robots. In both cases, a potentially threatening presence seems to help sharpen our focus.
In the future, robots will almost certainly become more and more common in nursing homes, at hotel check-in desks, behind the wheel, and elsewhere. Being surrounded by mechanical companions is a new frontier for psychologists and neuroscientists, who don’t yet know how we’ll behave in their presence.
“If we want to promote the use of robots in our daily life, it seems a need to first understand how Human Robot Interaction can impact human psychology,” Spatola said. “More than that, if we can consider a robot as a social agent, it opens a lot of ethical questions about how we should consider them. Should we sell them as a phone or computer, or should we consider them as a new form of technology, especially considering their speed of development?”
Learning more about how robots can positively or negatively influence our own behavior will likely have big implications for how we program and interact with them — and how they interact in return.