Westworld is a hell of a show, but the sense of dread it elicits is nothing new. Pygmalion sculpted a woman who came to life. Same goes for the Golem, only with mud. The amalgamated Frankenstein jolted awake to get all murderous. Humans creating life in their own image is a cornerstone of fiction.
And until recently, such creations have stayed there. But today, ever-sophisticated robots are graduating from Disneyland-style animatronics into increasingly realistic, intelligent beings. Take the famous human replicas of Hiroshi Ishiguro. Or the theatrical androids from Engineered Arts in the UK, or Sophia, the humanoid without a scalp (OK, maybe that one's not particularly intelligent). They're all so entrancing, it's easy to forget how ethically problematic they could be.
Not in the homicidal Westworld sense—androids anywhere near that smart or physically capable are so far off, it’s not even worth speculating. No, more pressing are the surprising social problems that will come with realistic humanoid robots, which might work the front desk of hotels, or stand in for us at the office, or live with us as companions.
Google ran smack into an early manifestation of those problems last month, when it debuted its Duplex AI-powered voice assistant. The audio algorithm is realistic enough to fool humans into thinking it’s human—and it turns out people don’t like being tricked. Google was forced to clarify that Duplex would introduce itself first as an AI. Which kinda defeats the purpose of making a realistic voice assistant in the first place, but whatever.
Ethical stumbles like this can challenge the budding relationship between humans and physical machines, too. Take ElliQ, a robot-tablet combo that reminds the elderly to stay active while acting as a window into their family's social media feeds. ElliQ's designers went out of their way to remind the user they're talking to a robot. "The voice we use has a robotic accent, so we're not trying to hide that in a voice that's human," says Dor Skuler, CEO of Intuition Robotics.
ElliQ kind of looks like it has a head, but it doesn’t have eyes. A bit unsettling? Maybe. But it was a conscious choice by Intuition, because humans try to give agency to pretty much anything with eyes. For Skuler, convincing a user that an AI or humanoid robot is human is a dangerous game. “I think it creates the wrong expectation of the experience, and it’s somewhat dystopian,” he says. “I don’t think we want to live in a world where AIs pretend to be human and try to—I wouldn’t say coerce—but lead you down a path where you believe you’re talking to a human, and feel these feelings or emotions.”
Which is not to say we can, or should, stop humans from forming relationships with machines. That's inevitable. In fact, even in beta tests with an early home robot like ElliQ, users see the robot as a "new entity in their lives," Skuler says, rather than a device. To be sure, they know full well it's just a machine—"and yet, there is a sense of gratitude for having something with them to keep them company," Skuler says. (We met ElliQ last year and can confirm it's pretty charming.)
All this from a very early and relatively simple companion robot. Just imagine the bonds we'll form with far more advanced machines. Say 50 years from now we've got realistic humanoids walking among us. They still move a bit strangely, and their facial expressions are still a bit stiff, so they betray themselves as machines. This journey into the humanoid future will take us straight through the uncanny valley—the repulsion we feel when a robot is almost human, but not quite there.
But now it’s 100 years in the future, and you’re in a colleague’s office talking about TPS reports. As you’re getting up to leave, your coworker says goodbye, then goodbye again, then again. You’ve been talking not to a human, but a spot-on stand-in robot, and it’s glitching. You feel relieved to have cleared up the TPS report business, but you also feel deceived.
What you needed from the get-go was disclosure. “I say, ‘By the way, Matt, I’m an AI,’” says Julie Carpenter, a roboticist who studies human-robot interaction. “Or, ‘Julie is actually at home, and she’s teleoperating me.’” In that hypothetical scenario, the robot discloses itself as a robot, just as Google’s new voice assistant does. You might even say that the robot would be ethically required to do so, even if it ruins the artifice.
Those future ethical codes will likely vary country by country. “If you had a robot that maybe had some childlike qualities, perhaps in a shopping mall in Japan people might find that very engaging,” says Carpenter. “If you had a robot with similar childlike qualities in a shopping mall in the United States, people might find that really off-putting.” Roboticists will have to consider cultural context when they design interactions between humans and their robotic analogs.
And they’ll have to adapt to changing perspectives on those interactions, as new generations of robotic natives are born. “Kids that grow up in this world with robots are going to really be influencers in how society at large interacts with robots,” says Carpenter. “It’s really the kids that we need to watch to see what is going to be normal for them and what new norms of behaviors are they bringing into that culture.”
Ideally those norms won't include treating humanoid robots like crap, à la Westworld. (A bad sign, though: In one experiment in Japan in 2015, kids beat an unattended robot and called it an idiot.) But if you do know an android is an android—it's revealed itself to you—might it be tempting to walk all over it? Might the bonds we form with our creations be more along the lines of servitude than affection?
“Humans are great at developing social categories,” Carpenter says. “I treat you differently than I might treat my dentist. We go throughout our day modifying our social interactions for who we’re interacting with.” It’s not hard to see a future, then, where different types of robots get different levels of respect and affection. Your home humanoid is a beloved companion, while you can treat the front-desk humanoid with a bit less respect because, well, it doesn’t have feelings in its brain, just ones and zeroes.
What didn’t you understand about a non-smoking room, exactly, Mr. Robot?