Robots Emulate Human Learning Process with Neural Networks

Conventional robots are programmed by trained engineers to perform tasks, either through explicit code or recorded motions.

Now imagine some not-so-distant future where engineers don’t need to program robots at all. What if the robots could teach themselves?

That would be an AI breakthrough to send technophobes running through the streets crying out, “Skynet is here!”

However, the reality isn’t nearly as dramatic. In fact, Google has already discovered a way to program robotic arms to teach themselves.

Using artificial neural networks, robots can emulate the biological networks found in the human brain and learn the same way we do: through experimentation.

In Google’s experiment, 14 robotic arms were tasked with picking up objects of various sizes and shapes, such as children’s toys and office supplies. Each robot was equipped with its own two-fingered gripper, and together the arms performed more than 800,000 grasp attempts.

The robots identified objects through a camera that captured monocular images. However, this type of vision lacks the depth perception of binocular vision, which presented a challenge. As a result, flat objects, like stacks of Post-it notes, occasionally slipped past the robots’ sensors.

Still, the robots were able to learn from their mistakes through the neural networks, refining their hand-eye coordination to pick up objects more efficiently. They accomplished this without any additional programming or intervention of any kind from the researchers.

The robots even learned to move obstacles away from their target objects and to reposition those objects in their bins so they could be picked up more easily.

Learning to pick up objects this way, much as toddlers learn to interact with their environment, is exciting, but it also exposes the biggest problem with the technology: what good is a robot that can’t learn a task quickly?

For practical applications like manufacturing processes, a robotic arm needs to be operational as soon as possible. Quickly programming a robot and letting it get to work is a lot more efficient than patiently waiting for it to figure things out on its own.

Robotic arms, like those from Universal Robots, are designed for fast integration into modern manufacturing facilities.

Rather than teach robots like we teach children, how can we combine pre-programmed skills with the AI shown in these experiments to push the limits of efficiency?

If you’re not afraid of the inevitable singularity, you can learn more about Google’s research into artificial neural networks and hand-eye coordination in robotics by reading the full report here.

If you're wondering how to automate industrial processes with more conventional robotics, read on about Why You Should Automate With Industrial Robots.