NVIDIA and FANUC Join Forces to Implement AI Robotics System

Super sophisticated GPUs from one of the world’s largest chipmakers are headed to a robotics company known for lights-out manufacturing operations, where robots build other robots.

What could possibly go wrong?

Recently, NVIDIA and FANUC Corporation announced a collaboration in Japan to implement artificial intelligence (AI) on the FANUC Intelligent Edge Link and Drive (FIELD) system. According to the press release, the goal is to “increase robotics productivity and bring new capabilities to automated factories worldwide.” That seems innocent enough. 

The FANUC canary-yellow industrial robots became famous for being workhorse juggernauts, but they weren’t known for being particularly bright. Now these giant industrial robots that make themselves as well as components for themselves are getting some deep learning parallel processing AI brains from NVIDIA. (Image courtesy of FANUC.)

So what GPU cards from NVIDIA are going to be used? The press release didn’t specify, so as of right now, we can only speculate.

History of NVIDIA GPUs for Deep Learning

In early 2015, NVIDIA CEO and Cofounder Jen-Hsun Huang announced the GeForce GTX Titan X, calling it “the most powerful processor ever” and saying that it was built specifically “for training deep neural networks.”

A few months later, NVIDIA came out swinging for enterprise deep learning applications, positioning the Tesla K80 above the Titan X, which is more consumer oriented because it lacks ECC (Error Correcting Code) protection and GPUDirect for clustering. A cluster is a system of computers connected over a high-speed network to create a high-performance computing (HPC) environment, and NVIDIA GPUs are used by the thousands in supercomputers like the Titan machine at Oak Ridge National Laboratory, once the world’s fastest.

NVIDIA released the Tesla M40, which costs around four times more than the K80, and the company proclaimed it “The World’s Fastest Deep Learning Training Accelerator.”

Okay, so the marketing lingo for NVIDIA’s recent string of GPUs aside, which NVIDIA GPU cards are going to be used with the FIELD system in Japan?

NVIDIA’s latest Pascal GPUs are pretty exciting, but now the word is out about the next-generation Volta GPUs from the company. So it could be that generation. Unveiled in the Xavier supercomputer chip for autonomous vehicles at GTC Europe 2016 a week ago, the Volta has a monstrous amount of computing power, which the cars using it will desperately need. Autonomous cars need to recognize and analyze thousands of images, interpret everything a human does behind the wheel, and take action to avoid injuring or killing passengers and bystanders. So NVIDIA’s 512-core Volta GPU better damn well be powerful.

Volta GPUs have insanely fast parallel processing and, courtesy of NVIDIA’s NVLink technology, higher bandwidth and incredible energy efficiency. (Image courtesy of NVIDIA.)

I suspect that the Volta GPUs will be used by FANUC in its new system, though the company may utilize the Pascal GPUs for deep learning instead. Either way, adding AI to the FIELD system will give robots the ability to teach themselves to do tasks faster and more efficiently. By learning together, eight robots can now accomplish in an hour a task that used to take a single robot eight hours.
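As a back-of-the-envelope illustration of that claim, here is a minimal Python sketch. It assumes perfectly linear scaling, meaning the experience gathered by several robots can be pooled into one shared model with no coordination overhead, which real distributed training rarely achieves.

```python
# Back-of-the-envelope sketch of shared learning across a robot fleet.
# Assumes perfectly linear scaling (no communication or coordination overhead),
# which is the idealized case behind the "eight robots, one hour" claim.

def wall_clock_hours(total_robot_hours_needed: float, num_robots: int) -> float:
    """Time to gather the same amount of pooled experience when robots learn in parallel."""
    return total_robot_hours_needed / num_robots

print(wall_clock_hours(8, 1))  # one robot working alone: 8.0 hours
print(wall_clock_hours(8, 8))  # eight robots pooling experience: 1.0 hour
```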

What Is the FIELD System for Manufacturing?

Though it is complicated, one place to start breaking the FIELD system down is to understand that it is based on edge computing and GPU-enabled AI.

According to a book from Springer Publishing called Pocket Data Mining: Big Data on Small Devices, edge computing “enables analytics and knowledge generation to occur at the source of the data. This approach requires leveraging resources that may not be continuously connected to a network such as laptops, smartphones, tablets and sensors.”

So in a FANUC manufacturing facility, the FIELD system will minimize the volume and cost of sharing data while also providing a secure connection to the cloud: robots and other sensors collect and process data in a decentralized way, and the cloud is used to store it. It’s like a centralization/decentralization spigot for collecting data, so that computing power can be maximized for individual manufacturing operations among the robots.
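To make that edge-versus-cloud tradeoff concrete, here is a small hypothetical Python sketch. The sensor data and the upload_to_cloud stub are invented for illustration and are not part of the FIELD system; the point is simply that summarizing data at the edge shrinks what has to cross the network.

```python
# Hypothetical sketch of edge-side preprocessing: instead of streaming every raw
# sensor sample to the cloud, the edge device keeps the raw data local and
# uploads only a compact summary. Names here are illustrative, not FIELD APIs.
import statistics

def summarize_readings(raw_samples: list[float]) -> dict:
    """Reduce thousands of raw readings to a few summary statistics."""
    return {
        "count": len(raw_samples),
        "mean": statistics.fmean(raw_samples),
        "stdev": statistics.pstdev(raw_samples),
        "max": max(raw_samples),
    }

def upload_to_cloud(summary: dict) -> None:
    # Stand-in for a real cloud client; only the small summary leaves the factory.
    print(f"uploading {summary}")

raw = [0.01 * i for i in range(10_000)]   # e.g., one minute of joint-torque samples
upload_to_cloud(summarize_readings(raw))  # a handful of numbers instead of 10,000
```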

At FANUC, robots produce other robots without the presence of humans. FANUC Robotics America Vice President Gary Zywiol once said about FANUC’s capabilities, “Not only is it lights out, we turn off the air conditioning and heat, too.” The complex houses 22 factories where a huge population of honeybee-yellow robots replicate themselves 24 hours a day, seven days a week. (Image courtesy of FANUC.)

Currently, FANUC produces between 22,000 and 23,000 computer numerical controlled (CNC) machines per month. With customers such as Tesla Motors (though Tesla prefers KUKA robots by a long shot) and Apple, and a pledge of $1 billion in February 2015 to expand its operations, FANUC is investing heavily in itself—replicating robots as well as AI and edge computing.

So after using a “we are in the age of AI” platitude in a recent statement from NVIDIA, CEO Huang continued about the collaboration with FANUC by saying: “GPU deep learning ignited this new wave of computing where software learns and machines reason. One of the most exciting creations will be intelligent robots that can understand their environment and interact with people. NVIDIA is delighted to partner with FANUC to realize a future where intelligent machines accelerate the advancement of humanity.”

According to the International Federation of Robotics, the number of industrial robots in operation will grow by 1.6 million units over the figure in its 2015 report, including robots like these from FANUC, ABB and KUKA. China buys the majority of industrial robots and manufactures more automobiles than any other country in the world, and FANUC is now poised to lose a bit of that share. In fact, with the slowed growth of China’s economy, due in part to the rapid industrialization and growth of alternate manufacturing economies like India, industrial robots and co-bots are going to replace expensive human labor—no matter what the global public relations industry would have you believe.

After buying the German robotics manufacturer KUKA, Midea (which specializes in air conditioners and household appliances) is poised to compete with FANUC. But FANUC’s partnership with NVIDIA is upping the stakes in that competition.

"Advances in artificial intelligence will allow robots to watch, learn and improve their capabilities," according to Kiyonori Inaba, who is a board member and general manager at FANUC. She highlighted the benefits of combining AI methodologies like deep learning, saying this about the technology: "Deep learning will also cut down the time-consuming programming of robot behavior. We are thrilled to be advancing the robotics revolution with NVIDIA."

Robotics and AI together just seems worrisome to me, but let’s explore this a little bit more.

The breakthrough that is enabling AI manufacturing platforms like FIELD to appear on the near-future horizon is GPU-accelerated deep learning.

According to the press release, “FANUC will use a range of NVIDIA GPUs and deep learning software to enable AI in the cloud, the data center and embedded within devices. These next-generation capabilities will include: deep learning robot training with NVIDIA GPUs, and GPU-accelerated AI inference in FANUC Fog units used to drive robots and machines.”
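The press release doesn’t describe what that pipeline looks like in practice, but a common pattern, sketched below in PyTorch purely as an illustration and not as FANUC’s or NVIDIA’s actual stack, is to train a network on a data-center GPU and then export a frozen copy that an embedded GPU can load for inference.

```python
# Illustrative only: train on a data-center GPU, then export a frozen model for
# inference on an embedded device. This is a generic PyTorch pattern, not the
# actual FIELD pipeline, which NVIDIA and FANUC have not described in detail.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Toy network standing in for a perception or grasping model.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 4)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy training loop on random data (real training would use sensor or camera data).
for _ in range(100):
    x = torch.randn(32, 64, device=device)
    y = torch.randint(0, 4, (32,), device=device)
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()

# Export a frozen, serialized copy that an embedded GPU on a robot could load.
scripted = torch.jit.script(model.cpu().eval())
scripted.save("grasp_model.pt")
```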

What Is a FANUC Fog Unit?

A “fog unit” refers to “fog computing,” which is the result of a partnership FANUC has with Cisco. Cisco’s fog computing allows FANUC robots to become “fog units.” Traditional cloud computing architecture actually causes latency in operations on a manufacturing floor, because of how much data is being sent to the cloud from each connected device.

The fog extends the cloud so that it is closer to the things that produce and act on IoT data. These devices, called fog nodes, can be deployed anywhere there is a network connection: on a factory floor, on top of a power pole, alongside a railway track, in a vehicle, or on an oil rig. Any device with computing, storage and network connectivity can be a fog node. Examples include industrial controllers, switches, routers, embedded servers and video surveillance cameras.
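To make the latency argument concrete, here is a hypothetical Python sketch of what fog-node logic might look like. The threshold, robot IDs and helper functions are invented for illustration and are not Cisco or FANUC software; the idea is that time-critical decisions happen locally, while routine data waits for a batched upload to the cloud.

```python
# Hypothetical fog-node logic, for illustration only: time-critical events are
# handled locally (low latency), while routine records are batched to the cloud.
import time

CLOUD_BATCH: list[dict] = []

def stop_robot(robot_id: str) -> None:
    print(f"{time.time():.3f} emergency stop sent to {robot_id}")

def handle_sensor_event(event: dict) -> None:
    if event["force_newtons"] > 500:      # a collision-like spike: act now
        stop_robot(event["robot_id"])     # local actuation, no cloud round trip
    else:
        CLOUD_BATCH.append(event)         # routine data can wait for a batch upload

handle_sensor_event({"robot_id": "R-07", "force_newtons": 812.0})
handle_sensor_event({"robot_id": "R-07", "force_newtons": 12.4})
print(f"{len(CLOUD_BATCH)} routine event(s) queued for the cloud")
```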

The FIELD system is a platform to improve factory production and efficiency with advanced AI. By combining AI and edge computing technology, the FIELD system processes the edge-heavy sensor data collected from various machines, allowing the machines to collaborate intelligently and flexibly to achieve advanced manufacturing capabilities.

The fog extends the cloud to be closer to the things that produce and act on IoT data. (Image courtesy of FANUC.)

Once one robot learns a task, all of the robots can pick up that task or function. It’s essentially a scalable, modular deep learning platform.

“You could look at this in an automotive context. It’s painstakingly hard to program a car to know every possibility. But if you take a car and drive it hundreds of thousands of miles, and the car learns over time what to do and what not to do, then you take that data model and download it to any number of vehicles and give them an instant ability to know.”
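The same “train once, copy everywhere” idea can be sketched in a few lines of Python. The file name and fleet below are made up, and the model file is assumed to come from a training run like the earlier sketch.

```python
# Illustration of "train once, deploy to the fleet": one learned model file is
# copied to every robot, which then runs inference locally. Names are made up.
import torch

FLEET = ["R-01", "R-02", "R-03", "R-04", "R-05", "R-06", "R-07", "R-08"]

def deploy(model_path: str, robot_ids: list[str]) -> dict[str, torch.jit.ScriptModule]:
    """Load the same frozen model onto every robot in the fleet."""
    return {rid: torch.jit.load(model_path) for rid in robot_ids}

deployed = deploy("grasp_model.pt", FLEET)   # model exported in the earlier sketch
print(f"{len(deployed)} robots received the same learned behavior")
```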

Bottom Line

What’s going on is essentially machine learning: the system teaches itself rather than being programmed to perform a task. The training takes place on very fast GPU servers located in the factory. Once the training model is established, it is communicated to another NVIDIA GPU embedded in each individual robot, so that every robot can pick up the instructions for its individual task.