Do me a favor and grab an object near you. Anything will do. Even if it’s something you’ve never handled before, odds are your brain automatically worked out how you should grasp the thing and with what force. It’s the kind of clever dexterity that makes you human. (You are human, I hope?)
Ask a robot to do the same and you’ll either get a blank stare or a crumpled object in the cold, cold grasp of a machine. Robots excel at repetitive tasks that require a lot of strength, but they’re still bad at learning how to manipulate novel objects. Which is why today a company called Embodied Intelligence has emerged from stealth mode to fuse the strengths of robots and people into a new system that could make it far easier for regular folks to teach robots new tasks. Think of it like a VR videogame—only you get to control a hulking robot.
If you want to teach a robot to do something like pick up a wrench, you can do it one of several ways. The first is to just brute-program it with all the movements it needs to grip the thing. Lines of code, one after the other. Very dull and very laborious.
(L-R) The Embodied Intelligence Founding Team: Peter Chen (CEO), Pieter Abbeel (President and Chief Scientist), Rocky Duan (CTO), Tianhao Zhang (Research Scientist).
A newer, more sophisticated technique is called reinforcement learning. At UC Berkeley, the lab that spun out Embodied Intelligence employs a robot named Brett, which can teach itself to put a square peg in a square hole by guessing. Each time it makes a random movement that gets the peg closer to the hole, the AI receives a reward. Try after try, the robot inches closer and closer to its goal until boom, it’s taught itself to master a children’s game over the course of 10 minutes.
So brute-programming is inflexible, and reinforcement learning from scratch is time-consuming for the robot. This, after all, is a physical machine bound by the laws of the physical universe, so it can only make so many attempts in a given amount of time. (Using reinforcement learning in a simulation is far speedier, since the virtual trials and errors can happen much more rapidly.)
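The trial-and-error loop behind that peg-in-hole demo can be sketched in a few lines. In this toy Python example, every name and number is illustrative rather than Brett's actual setup: the peg is reduced to a single coordinate, the hole sits at 0, and the robot keeps only the random moves that earn a higher reward.

```python
import random

def reward(position, hole=0.0):
    # Higher reward the closer the peg is to the hole.
    return -abs(position - hole)

def learn(start=5.0, trials=1000, step=0.5, seed=0):
    # Reward-guided trial and error: try a random tweak each round
    # and keep it only if it scores better than where we are now.
    rng = random.Random(seed)
    position = start
    for _ in range(trials):
        candidate = position + rng.uniform(-step, step)
        if reward(candidate) > reward(position):
            position = candidate
    return position

print(learn())  # ends up very close to the hole at 0
```

Note how nothing tells the system *how* to reach the hole; the reward alone steers a thousand blind guesses toward the goal, which is also why learning from scratch eats up so much robot time.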
A more precise technique is called imitation learning, in which an operator demonstrates for a robot how to put a square peg in a square hole. That’s as easy as joysticking the robot’s arms around—but that robot won’t be able to teach itself novel tasks.
What Embodied Intelligence has dreamed up is a hybrid system of imitation and reinforcement learning. Using a VR headset and controllers, a human can teleoperate the robot to do a certain task. This creates a more natural kinetic connection between the operator and robot, as machine learning algorithms—trained to match what the human does—guide the robot’s motions. Then the reinforcement learning kicks in, refining the robot’s movements with trial and error until it’s even better at its job than the human taught it to be.
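In pseudocode terms, that hybrid boils down to a warm start from human demonstrations followed by trial-and-error refinement. Here is a minimal Python sketch with an invented one-number "policy" standing in for the real learned controller; none of this is Embodied Intelligence's actual algorithm.

```python
import random

def imitation_init(demonstrations):
    # Simplest possible imitation: average the human demonstrations.
    return sum(demonstrations) / len(demonstrations)

def reinforce(policy, ideal, trials=500, step=0.1, seed=1):
    # Reinforcement phase: keep only the random tweaks that score better.
    rng = random.Random(seed)
    score = lambda p: -abs(p - ideal)   # reward peaks at the ideal motion
    for _ in range(trials):
        tweak = policy + rng.uniform(-step, step)
        if score(tweak) > score(policy):
            policy = tweak
    return policy

demos = [0.8, 1.2, 0.9]                 # noisy human demonstrations
policy = imitation_init(demos)          # warm start near the humans' average
policy = reinforce(policy, ideal=1.05)  # trial and error closes the gap
```

The point of the warm start is that reinforcement no longer begins from random flailing: the demonstrations land the policy near the goal, and trial and error only has to close the last gap, then push past human-level precision.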
“Typically you want your robots to be superhuman, you don’t want them to be just as good as the human who demonstrates,” says Pieter Abbeel, co-founder and president of Embodied Intelligence. “You want them once they’ve acquired a skill to make that skill even faster, more accurate, more reliable through their own trial and error without humans continuously in the loop. Because humans won’t be able to demonstrate motions that are as fast as a robot could in principle move.”
Imagine, if you will, the factory of the future. Instead of some poor programmer coding each robot to do a different task on the assembly line, they would instead demonstrate the movement in VR. The robots may be a bit awkward at first, but over time they’ll use their AI to hone their motions. And as researchers build better and better learning algorithms, the robots might take one particular task a human has taught them and use it to teach themselves how to accomplish something different.
Still, this system is in its very early days. At the moment it’s working on a PR2 research robot, which is relatively slow and clumsy. And any modern robot is nowhere near as dexterous as a human, so even though this thing is great at replicating an operator’s movements, it can’t replicate fine grasping. But if Embodied Intelligence has its way, manufacturers could soon stock factories with robots that learn from humans, then supercharge those powers by teaching themselves.
And just imagine what more than one robot can achieve with this kind of system. If you’ve got 100 machines talking to each other in the cloud, and one learns something particularly useful, it could then distribute that knowledge to its compatriots. Now we’re talking about a potentially powerful hive mind. And the robots don’t even have to be of the same shape and size. Researchers have already figured out how to get this knowledge to translate between different types of machines.
In the nearer term, the idea is to not only make robots smarter, but to make them easier for people to teach. Programming Brett in the lab takes a lot of time and also something called a PhD, neither of which most people have. “What we are seeing here instead is anyone who can use a VR headset can teach a robot new skills quickly,” says Peter Chen, co-founder and CEO of Embodied Intelligence. This is the kind of democratization that will make robotics—traditionally far less accessible a field than software, which anyone with a computer can tinker with—really take off.
Will this in turn make it easier for robots to replace people in the workforce? Sure, maybe. But more and more we’re seeing robots working alongside humans, taking over tiresome, repetitive tasks and freeing up workers to do uniquely human tasks that require a keen sense of touch, for instance. And if we want any hope of making this a fruitful relationship, we’ll need our robotic coworkers to learn quickly, lest they become a burden instead of a blessing and hit us on the head with wrenches.