Just imagine what you could do with a third arm.
You could sip coffee and type an email. You could scratch your nose and play Call of Duty. You could solve the Rubik’s cube and conduct a symphony. You could play Ping-Pong and knit a sweater. Clearly, we could all use a third arm.
Fortunately, giving us a hand with a third arm might be a promise science can deliver on, like, soon. That’s because engineers at the Advanced Telecommunications Research Institute in Japan recently showed off a brain-machine interface that lets users complete two different tasks using their fleshy arms in concert with a third, mind-controlled robotic arm.
While cloning ourselves for the sake of productivity will remain out of the question for quite some time, a third arm may suffice in the meantime for the multitaskers among us.
Mind, Body, Robot
Here’s something you wouldn’t have said just 10 years ago: Mind-controlled prosthetic limbs are old news. Indeed, today there are a handful of variations on “smart” arms that flex and bend in response to neural activity in the brain. A person wears a device that monitors their brain activity and sends commands when the wearer imagines “flexing” or “bending.” By linking specific patterns of brain activity with corresponding actions, it’s possible to control the device with thoughts.
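To make that linking step concrete, here’s a deliberately simplified sketch. The feature vectors, templates, and the nearest-template approach are all invented for illustration; real interfaces, including the one in this study, use trained classifiers on EEG data.

```python
# Toy illustration of a brain-machine interface's decoding step.
# During "calibration," the system records an average feature vector
# while the user imagines each action; at run time, new brain activity
# is mapped to whichever learned pattern it most resembles.

import math

# Hypothetical calibration data (made-up numbers for illustration).
command_templates = {
    "flex": [0.9, 0.1, 0.2],
    "bend": [0.1, 0.8, 0.7],
}

def decode(features):
    """Return the command whose calibration template is closest."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(command_templates,
               key=lambda cmd: distance(features, command_templates[cmd]))

print(decode([0.85, 0.15, 0.25]))  # closest to the "flex" template
print(decode([0.2, 0.75, 0.6]))    # closest to the "bend" template
```

The key idea survives the simplification: the device never reads a literal thought, it just matches new activity against patterns it saw during calibration.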
After losing his arm to cancer in 2005, Johnny Matheny of Florida is said to be the first person to live with a mind-controlled robotic arm. He’s a cyborg. Go ahead and say it.
Mind-controlled prosthetics have drastically improved in the years since, and researchers are looking at ways not only to help amputees but also to augment the abilities of otherwise healthy people. And that presents a new challenge.
You see, when a person uses a mind-controlled prosthetic limb as a replacement for a missing arm, they are singularly focused on accomplishing an action with that arm. They need it to bend, so they imagine bending. They need it to grasp, so they imagine grasping. Essentially, the brain is sending a single, clear signal to the mind-reading device.
In this case, researchers Christian Penaloza and Shuichi Nishio added a layer of complexity. Rather than perform a single action, they wanted to see if people could perform two very different tasks at the same time using their biological arms and a robotic arm. That means the mind-reading device must decipher two very different arm motion signals coming from the brain.
It needs to know which actions are meant for the robotic arm, and which are intended for the other two. And, based on their early results, the researchers’ brain-machine interface appears to have been up to the task.
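One way to picture that sorting problem is as a routing step after decoding. This sketch is purely illustrative (the intent labels, commands, and confidence threshold are invented, not the study’s method): the interface actuates the robotic arm only when the decoded intent targets it, and otherwise does nothing, leaving the biological arms to their own business.

```python
# Toy sketch of routing decoded intent (invented labels and threshold):
# only robot-directed intents, decoded with enough confidence,
# become robotic-arm commands.

def route(decoded_intent, confidence, threshold=0.7):
    """Return a robot command to issue, or None to do nothing."""
    robot_intents = {"grasp": "CLOSE_GRIPPER", "release": "OPEN_GRIPPER"}
    if decoded_intent in robot_intents and confidence >= threshold:
        return robot_intents[decoded_intent]
    return None  # intent belongs to the biological arms, or is too uncertain

print(route("grasp", 0.9))     # robot-directed and confident: CLOSE_GRIPPER
print(route("balance", 0.95))  # not a robot command: None
print(route("grasp", 0.4))     # below the confidence threshold: None
```

The confidence gate matters: a misrouted signal wouldn’t just fail to move the robot, it could make the robot act while the user is trying to do something else entirely with their own hands.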
They challenged 15 participants to grasp a bottle with the robotic arm while, at the same time, balancing a ball atop a board with their two hands. Eight of the 15 successfully multitasked with their third arm, which isn’t too shabby. Some people were consistently good performers, while others just couldn’t get it.
The researchers published their findings Wednesday in the journal Science Robotics.
While it’s not clear why some people could multitask and others couldn’t, there’s a good chance it has nothing to do with the technology. Researchers think people’s poor performance is simply because they don’t excel at doing two things at once. The people who struggled to control a third limb might also have trouble patting their heads and rubbing their bellies at the same time.
Drummers, the coolest of multitaskers, might have no trouble working with a robotic arm in addition to their two limbs.
As these mind-controlled robotics continue to evolve, one can only imagine the future demands employers will place on potential hires: Must have a Bachelor’s degree, three years’ experience, proficiency in Microsoft Word, and the ability to gracefully control auxiliary robotic appendages with the mind.