Engineers Create Robot That Imagines Itself

Engineers at Columbia University have created a robot that learns what it is, by itself, with no prior knowledge of physics, motor dynamics, or geometry.

This is a breakthrough in robotics. Until now, robot learning has been largely limited to human-provided simulators and models—in essence, to having a human explain to the robot what it is. Robots have not learned to simulate themselves.

“If we want robots to become independent, to adapt quickly to scenarios unforeseen by their creators, then it’s essential that they learn to simulate themselves,” said Hod Lipson, professor of mechanical engineering at Columbia and director of the lab where the research was conducted.

The engineers used a four-degree-of-freedom articulated robotic arm for their experiment. At first, the machine did not know that it was a robot arm; it had no notion of its own shape or of how it could move. It began by moving randomly while collecting data about its actions, executing roughly 1,000 trajectories and recording 100 points along each. The robot then used a deep learning technique to analyze its own movements, eventually arriving at a self-model.
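
The published system is more sophisticated, but the core loop, collecting random motion data and fitting a forward model to it, can be sketched in a few lines. The sketch below is a hypothetical illustration in PyTorch, not the authors' code: the toy fake_arm_forward kinematics, the network size, and the training schedule are all assumptions standing in for the real robot and architecture.

```python
import torch
import torch.nn as nn

def fake_arm_forward(joints):
    """Stand-in for the physical robot: maps four joint angles to an
    (x, y, z) end-effector position. The real mapping is exactly what
    the robot does NOT know; this toy kinematics only generates data."""
    x = torch.cos(joints[:, 0]) + 0.5 * torch.cos(joints[:, 0] + joints[:, 1])
    y = torch.sin(joints[:, 0]) + 0.5 * torch.sin(joints[:, 0] + joints[:, 1])
    z = 0.3 * joints[:, 2] + 0.1 * torch.sin(joints[:, 3])
    return torch.stack([x, y, z], dim=1)

# "Motor babbling": roughly 1,000 trajectories of 100 points each,
# matching the data volume mentioned in the article.
joints = (torch.rand(1000 * 100, 4) - 0.5) * 3.14
positions = fake_arm_forward(joints)

# The self-model: a small network that learns joint angles -> position.
self_model = nn.Sequential(
    nn.Linear(4, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3),
)
optimizer = torch.optim.Adam(self_model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(2000):
    # Fit the self-model to randomly sampled minibatches of its own data.
    idx = torch.randint(0, joints.shape[0], (256,))
    loss = loss_fn(self_model(joints[idx]), positions[idx])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```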

The first self-models were highly inaccurate, giving the robot little sense of what it looked like or of how its joints were connected. After more than 30 hours of self-training, however, the self-model became consistent with the actual physical robot to within about four centimeters.

The robot first performed a pick-and-place task in a closed-loop setting. At each point along the trajectory, it could compare what was actually happening to its physical body against what its self-model predicted, and recalibrate the self-model accordingly.
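
Continuing the earlier sketch, a closed-loop run might look like the following: at every waypoint the self-model's prediction is checked against what the sensors report, and the model takes one small gradient step toward reality. This reuses the hypothetical self_model, fake_arm_forward, optimizer, and loss_fn from the previous block and illustrates the idea, not the published method.

```python
def run_closed_loop(trajectory):
    """Follow a trajectory of joint commands, recalibrating the
    self-model at every point from the observed position."""
    for q in trajectory:                              # q: one joint command
        predicted = self_model(q.unsqueeze(0))        # what the model expects
        observed = fake_arm_forward(q.unsqueeze(0))   # what the "sensors" report
        error = loss_fn(predicted, observed)
        optimizer.zero_grad()                         # one small corrective
        error.backward()                              # gradient step against
        optimizer.step()                              # the live error

# e.g. one trajectory of 100 random waypoints:
run_closed_loop((torch.rand(100, 4) - 0.5) * 3.14)
```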

[Video: Unshackling Robots: Self-Aware Machines]

Eventually, the robot was able to grasp objects at specific locations and place them in a bin 100 percent of the time.

The researchers then tested the robot in an open-loop setting, where it performed the task relying entirely on its internal self-model, without any sensory feedback, much like a person picking up an object with their eyes closed. The robot completed the task successfully 44 percent of the time.
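
In the same toy setup, an open-loop run would commit to a joint command chosen purely from the self-model's predictions and execute it with no correction, judging success only afterwards. The candidate-sampling planner below is an assumption made for illustration, not the planner used in the study.

```python
def reach_open_loop(target, tolerance=0.04):   # tolerance ~ 4 cm, per the article
    """Pick the joint command whose *predicted* position is closest to
    the target, execute it blindly, and only then check the outcome."""
    candidates = (torch.rand(5000, 4) - 0.5) * 3.14
    with torch.no_grad():
        best = candidates[(self_model(candidates) - target).norm(dim=1).argmin()]
    actual = fake_arm_forward(best.unsqueeze(0))   # no feedback, no correction
    return (actual - target).norm().item() < tolerance

# e.g. success rate over 100 random reachable targets:
targets = fake_arm_forward((torch.rand(100, 4) - 0.5) * 3.14)
rate = sum(reach_open_loop(t) for t in targets) / len(targets)
```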

The self-modeling robot was also given other tasks to perform, such as writing text with a marker. And to test whether the self-model could detect and adapt to damage, the researchers 3D-printed a deformed part to simulate damage. The robot detected the change, adjusted its self-model to compensate, and resumed its tasks with little loss of performance.
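
One simple way to realize such damage detection, continuing the earlier sketches, is to watch for a persistent gap between the self-model's predictions and the sensed positions, and to trigger retraining when that gap exceeds a tolerance. The function and threshold here are illustrative assumptions.

```python
def detect_mismatch(model, sense_fn, probe_joints, threshold=0.08):
    """Return True when the mean gap between the self-model's predictions
    and the sensed positions exceeds a tolerance, signaling that the body
    no longer matches the model and self-training should resume."""
    with torch.no_grad():
        gap = (model(probe_joints) - sense_fn(probe_joints)).norm(dim=1)
    return gap.mean().item() > threshold

# e.g. probing with fresh random motions from the earlier sketches:
probe = (torch.rand(200, 4) - 0.5) * 3.14
if detect_mismatch(self_model, fake_arm_forward, probe):
    pass  # re-enter the self-training loop from the first sketch
```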

Lipson believes that self-imaging is vital to robots becoming more resilient and adaptive, and better able to serve humans—but he is aware that the technology needs to be handled with care.
