Humanity is entering a confusing time: the Age of the Bizarrely Intelligent Robots. No longer confined to cages on factory floors, the machines are increasingly walking, rolling, and hopping among us. But the question is, are we ready for their invasion? After all, interacting with other humans is hard enough for most of us (OK, maybe mostly me). How on Earth can we get along with awkward, unfeeling robots?
By being prepared, that’s how. Welcome to the world of human-robot interaction, in which people have to adapt to the machines as much as the machines have to adapt to us. We can get along perfectly fine with robots, I swear. It’s just going to take a bit of effort, especially in these early days of the revolution.
The people who really need to do the hard work are the robot manufacturers. I mean, yeah, they need to build the robots—but more critically, they need to teach humans how to set the right expectations for their new mechanical helpers. Studies show that people who interact with a conversational robot tend to expect the robot to do human things as well as it can say human things. “This is just a hypothesis, but that’s probably because we tend to anthropomorphize robots a lot,” says Anca Dragan, who studies human-robot interaction at UC Berkeley. “The moment we see some evidence of human-like intelligence we generalize that to other avenues.”
Robots, of course, have nowhere near the physical capabilities of humans. They’re strong as hell and great at repetitive tasks, but they’re relatively awful at manipulation. So manufacturers of robots, especially robots meant to be companions, have to subtly telegraph that the machines are still fairly basic. Instead of having the robot speak human, for instance, you can go full R2-D2 with beeps and boops.
This signals to the user that the machines still have their limitations, and that we should treat them as such. For the time being, that means treating them like your grandparents—get out of their way and help them if they get stuck. (I’m not making this up. It’s the advice of at least one manufacturer, which makes robots that roam hospitals.)
Even when robots are better at something than humans, like driving, they’ll need to nonverbally communicate with the people around them. A robocar could speed through city streets, screeching to a halt at the last possible second as a pedestrian crossed the road ahead. “Compare that with a car that makes sure to slow down,” Dragan says, “not because it has to, but because then you’ll actually understand that OK, it turns out that this car will stop for me.” It’s the same kind of nonverbal communication you use with a human driver, subtly locking eyes from behind a windshield.
So robots need to figure out ways to engender trust. And that brings us to an ethical quandary: What happens when robots begin to take advantage of that trust? Plenty of robots, especially those meant for the home, will be mighty charming. In the near term, they’ll act more like pets, following us around and keeping us company. In the long term, they’ll get better and better at manipulating our world, doing things like lifting the elderly out of bed. That may seem affectionate, but a robot can’t genuinely return your affection; its love is a calculation, not an emotion. It’s only a matter of time before a robotmaker figures out how to exploit that nonreciprocal relationship.
This will be particularly problematic for children and the elderly, who may not understand the nature of their robotic bond. So say a kid gets attached to a particularly intelligent robotic doll. The manufacturer could exploit that bond by trying to sell the kid on a firmware update that makes the doll even more lifelike. A fantastical situation, to be sure, but not outside the realm of possibility, because capitalism.
And that’s just one of the many robots on the horizon and the many interactions we’ll have with them. So let’s think hard about how intelligent machines make us feel. Take it from a guy who once told a robot he loved it.