Meet Blue: The Cheap and Manipulative (in a Good Way) Robot

In a tiny lab at UC Berkeley, next to the whirring 3D printers on the wall, in front of an old Persian-rug-patterned couch, stands Blue the robot. It’s a pair of bulky humanoid arms—only with pincers for hands—attached to a metal stand. Wielding a pair of VR motion controllers, I wave my arms around, and Blue follows me faithfully. It’s my own robotic doppelgänger, kind of like the human-piloted, monster-fighting bots of Pacific Rim, only way cheaper.

That’s the beautiful thing about Blue. Research on robots has for decades been hamstrung by extravagant costs—the popular research robot PR2, a pair of arms not dissimilar from Blue, will set a lab back $400,000. Blue’s reliance on 3D-printed components puts its price tag much lower, at just $3,000 in materials per arm, and the goal is to get the total cost, with manufacturing at scale, to $5,000 per arm. If Blue’s creators have their way, that price point will launch the robot into research stardom, forging a future in which Blue’s descendants do our dishes, fold our laundry, and pick up around the house. And who knows, maybe one day they’ll fight giant monsters making a mess of San Francisco.

Historically, if you wanted to operate a robot arm, you had to keep humans far, far away, lest the machine fling them across the room. That’s why industrial robots have been literally kept in cages. But robots have been getting a lot better at sensing their world, in particular reacting to human contact by stopping before they hurt us. This has led to a boom in collaborative robotics, where humans work right alongside machines.

“That’s worked pretty well for a lot of existing robots,” says UC Berkeley mechanical engineer David Gealy, who leads the Blue project. “But the challenge is you take an expensive industrial robot, and then you add sensors and feedback control to it and make it even more expensive.”

The author pilots and (temporarily) breaks the robotic system by getting in the way of the VR motion sensors.

Blue, on the other hand, isn’t particularly sensitive to human touch. Instead, it’s elastic, in a sense. As I pilot the arms around, Gealy can push on them, and the arms give way a bit instead of shutting down. This is because the robot’s relatively cheap motors are “backdrivable,” meaning a human can grab the arms and move them around even when the machine is powered off.
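The "give way a bit instead of shutting down" behavior can be pictured as a spring-damper controller: rather than rigidly holding a target angle, each motor applies a soft restoring torque, so a human push simply deflects the joint. The sketch below is purely illustrative, with made-up gains and function names, and is not Blue's actual control code.

```python
# Illustrative sketch of a compliant (spring-like) joint controller.
# A backdrivable motor under this scheme pushes gently back toward its
# target instead of fighting a disturbance at full force.
# All parameter values are invented for illustration.

def compliant_torque(q, q_target, dq, stiffness=15.0, damping=1.5, max_torque=5.0):
    """Spring-damper torque toward the target joint angle, capped at a safe limit."""
    torque = stiffness * (q_target - q) - damping * dq
    # Clamp so the arm can never shove a person with more than max_torque.
    return max(-max_torque, min(max_torque, torque))

# A person deflects the joint 0.3 rad from its target: the controller
# responds with a modest restoring torque rather than an error state.
print(compliant_torque(q=0.3, q_target=0.0, dq=0.0))
```

A stiff industrial arm, by contrast, would treat the same deflection as a tracking error to be corrected at maximum effort, which is exactly why such arms live in cages.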

Being on the cheaper side, the motors aren’t supremely accurate. Blue won’t hold its own against an assembly robot that has to, for example, put a tiny screw in place over and over. But Blue is accurate enough for the tasks it will need to perform.

Those tasks will involve exploring the frontier of how robots grasp, manipulate, and interact with all kinds of objects. “This robot is designed for the assumption that in the future, robots will be controlled much more intelligently by AI systems that use visual feedback, that use force feedback, much like how humans control their own arms,” says UC Berkeley’s Pieter Abbeel, a robotics researcher who’s overseeing the project.

Say you want Blue to learn to fold a towel. For a sensitive collaborative robot, that might be a tough task, because bumping into the surface of the table might trigger it to stop. But being particularly flexible, Blue can put force on the table when reaching for the towel without freaking out. This is how we humans do it, and how we want future machines to do it as well: We first eyeball an object, then combine that vision with a sense of touch as we begin to manipulate the object. We don’t bump into something unexpected and then shut down—we adapt and feel our way through the world.

The thing is, being super cautious isn’t ideal for either us or the machines. If you’re afraid of bumping up against the table, folding a towel gets a whole lot more difficult. “If something is totally safe, it’s not useful,” says UC Berkeley roboticist Stephen McKinley, Blue’s cocreator. “If you think about the environment we live in every day, most of the objects we interact with are not safe unless they’re useless. Everything is out there to hurt you if you want to actually fulfill a function.” Bicycles and cars are two obvious examples.

The trick with robots is to mitigate that danger, which is a matter of getting them to interact more effectively with the objects in their world. One perk of a $5,000 Blue is that labs could buy several of the robots and run learning tasks on them in parallel, speeding up the rate at which their understanding of the world improves.

“Unlike children, where each has to learn their own way, with robots you can have the same brain for all of them,” says Abbeel. One robot might stumble upon a solution quicker than the others, then share that knowledge, making learning that much more efficient. Plus, because Blue is tough, researchers can push it harder than they would a pricier machine that’s more sensitive to the world around it.

“The price point is amazing,” says Brown University roboticist Stefanie Tellex. “Like, whoa. It really opens up the availability of manipulator robots to a much broader audience. $5,000, that’s two laptops.”

Roboticists’ gain may eventually be humanity’s gain, if Blue can help push robotic manipulation research forward. Giant monsters in San Francisco Bay, take note.
