Is Artificial Intelligence Already Streamlining Its Own Supply Chain?

We’re now in a phase of computing increasingly characterized by the engineering of complex automated systems, built by applying several new and convergent technologies: deep learning, cloud computing and massive arrays of data-producing sensors.

The hardware at the center of deep learning applications for industrial automation is a new breed of specialized graphics processing unit (GPU). A few companies design these chips, but none more notably than Santa Clara-based NVIDIA. (Image courtesy of NVIDIA.)

NVIDIA’s deep learning GPUs have been at the center of partnerships and agreements with a multitude of companies and customers. For example, NVIDIA has partnered with U.S.-based Tesla Motors to provide deep learning GPUs for autonomous driving; with Japan-based FANUC Robotics to create a deep learning “factory manager” that oversees manufacturing robots which themselves build other manufacturing robots; with U.S.-based Amazon Web Services to provide a web platform for creating deep learning applications; with China-based Hikvision to power that company’s sophisticated video surveillance systems; and with Japan-based Komatsu to equip its SMARTCONSTRUCTION mining and construction vehicles.

If you stop to take account of how many deep learning partnerships NVIDIA has entered into with powerful commercial entities in various sectors of global industry, it quickly becomes overwhelming. But how do NVIDIA and its manufacturing partners apply deep learning along the supply chain that produces NVIDIA GPUs?

How does NVIDIA automate processes using its own deep learning technology—like this DGX-1—along its supply chain? (Image courtesy of NVIDIA.)

If you were NVIDIA, wouldn’t you try to automate as many processes as possible, at every point you could along your own supply chain, using the deep learning technology you developed? The answer for at least one part of NVIDIA’s supply chain is a definite yes. According to Murali Gopalakrishna, who heads up product management for intelligent machines at NVIDIA:

“They are using deep learning in the manufacturing process of NVIDIA’s product. For example, we are using deep learning for industrial inspection of our PCB [Printed Circuit Board] and its components. We go out of our way to use deep learning for industrial projects—otherwise what would be the point?”

The idea behind this article is to explore where else this “hardware brain” of deep learning technology is active in aiding and improving areas along “its own” supply chain. Since human beings are still integral to the manufacturing process in the semiconductor industry (for now), a better question might be: How is artificial intelligence (AI) aiding humans in the creation and replication of its own “brain”?

Since the supply chain for NVIDIA involves a lot of intellectual property, many lines of inquiry would of course end with stonewalling, but it isn’t hard to imagine how deep learning could easily be implemented at every stage of NVIDIA’s manufacturing supply chain.

What’s a GPU?

Usually the “brain” of a computer refers to a computer’s central processing unit (CPU). However, in the nascent age of artificial intelligence, there’s a new “brain” in town: the GPU. The GPU is key in powering deep learning, the subset of machine learning associated with AI that relies on learning data representations and not just task-specific algorithms.

Aside from the physical “brain” hardware that is the GPU, there are two other primary elements to deep learning AI. All three are necessary to accomplish anything with deep learning.

The Three Core Elements of Deep Learning

The creation, evolution and convergence of three elements are responsible for the global acceleration of AI:

  1. Deep learning algorithms
  2. Storage and access to massive sets of data
  3. GPU computation

The third element, the GPU, is the hardware component of deep learning.

What makes the GPU so important to deep-learning AI?

Without a doubt, the GPU is central to both narrow AI (a system that acts intelligently in a specific domain) and strong AI (a system that acts intelligently across general domains).

The reason is the GPU’s ability to process many similar problems simultaneously. This is called “parallel processing,” and it was originally pioneered for graphics: solving thousands of simple geometry problems at the same time, in parallel.

In contrast, a CPU processes math problems and other operations in a linear fashion. It solves one problem per core and moves on to the next one. The function of a core is to receive instructions and perform calculations. Software engineers write these sets of instructions as part of a program.

Processors can have a single core or multiple cores. A processor with two cores is called a dual-core processor and one with four cores is called a quad-core processor. Processors for home computers can even have six or eight cores. The more cores a processor has, the more computations it can make at the same time.
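To make the contrast concrete, here is a minimal Python sketch (an analogy, not NVIDIA code). The loop processes one element at a time, the way a single CPU core steps through its instructions, while the vectorized call expresses the whole batch as one operation, which is the form of work that GPU libraries such as CuPy or PyTorch spread across thousands of cores.

```python
import time
import numpy as np

# Two large vectors to add element-wise: uniform, independent arithmetic,
# exactly the shape of problem that parallel hardware excels at.
a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

def add_serial(x, y):
    """One element at a time, like a single CPU core stepping through work."""
    out = np.empty_like(x)
    for i in range(len(x)):
        out[i] = x[i] + y[i]
    return out

def add_vectorized(x, y):
    """One call over the whole batch. On a GPU (e.g., via CuPy or PyTorch),
    this same expression is dispatched across thousands of cores at once."""
    return x + y

start = time.perf_counter()
add_serial(a, b)
print(f"serial:     {time.perf_counter() - start:.3f} s")

start = time.perf_counter()
add_vectorized(a, b)
print(f"vectorized: {time.perf_counter() - start:.3f} s")
```

Deep learning is dominated by exactly this kind of uniform arithmetic (matrix multiplications are enormous batches of independent multiply-adds), which is why the GPU became the workhorse of the field.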

As an engineer or designer, you may or may not know the supply chain behind the things you make. If you agree that the GPU is the most important hardware component driving deep-learning AI, then you are probably aware that Santa Clara-based NVIDIA is leading the world in the design and sale of deep learning GPUs.

Deep learning GPUs could be thought of as the primary visual cortex of AI, responsible for processing received visual information, whether it is a huge data set of images, or 3D scanned reality data. Nowhere is this analogy more applicable than in the example of autonomous cars.

As NVIDIA CEO Jensen Huang said during his 2018 CES keynote, “We're now able to use software to write software by itself in ways that we can never imagine, and achieve results that you're now starting to see. AI is going to revolutionize so many industries. One of the most important industries it is going to revolutionize is transportation. AV, Autonomous Vehicles powered by deep learning AI, is going to revolutionize driving, no doubt.”

Uber and NVIDIA announced a partnership at the same conference, one in which NVIDIA deep learning tech would be applied to Uber’s evolving fleet of autonomous vehicles. However you feel about the human versus driverless car debate, the fact remains: a pedestrian was recently struck and killed by a driverless Uber car in Arizona, igniting a firestorm of debate over the relative safety of autonomous computer “drivers” versus human drivers. Only time will tell how this pans out, though NVIDIA is surely intent on continuing to innovate and capitalize with companies like Uber and Tesla in this sector.

The GPU Supply Chain

The GPU supply chain begins with raw material from the Earth. Silica sand is mined first; then come several stages of refining, followed by a long and intricate manufacturing process. Finally, the GPU passes through quality control and inspection phases with extremely strict guidelines before it is packaged and sold to NVIDIA’s customers.

Since NVIDIA employs a fabless manufacturing strategy, deconstructing the supply chain is a matter of relatively simple detective work. At NVIDIA—which has more than 40 offices across Europe, the Americas, Asia and the Middle East—Chip Operations is the division responsible for designing its GPUs, and Board Operations designs the PCBs that house a computer’s electronic components.

NVIDIA uses Taiwan Semiconductor Manufacturing Company (TSMC) to etch its complicated designs onto silicon wafers. These are then sent to BYD and Foxconn, which complete the manufacturing of NVIDIA-branded devices.

So, let’s break down the steps that go into creating an NVIDIA GPU, and see where deep learning could be used to improve efficiencies.

Stage 1: Mining Raw Silica Sand from Beneath the Earth at Cape Flattery

Japan-based Mitsubishi bought the planet’s biggest silica sand mine in 1977: the Cape Flattery Silica Mine in Queensland, Australia, which holds an estimated 200 million tons of 99.93 percent pure silicon dioxide. Two million tons are exported from the mine every year for use in the glass, foundry and chemical industries.

Though there are no NVIDIA GPUs powering autonomous mining vehicles at Cape Flattery’s Silica Sand mines, Komatsu has provided mining company Rio Tinto with 73 driverless vehicles for 24/7 iron-ore mining in nearby Western Australia. And Caterpillar announced that it will retrofit self-driving technology to old Komatsu and Caterpillar vehicles for another mining company in Western Australia named Fortescue Metals Group. (Image courtesy of Cape Flattery Silica Sand Mine.)

Komatsu manufactures construction and mining equipment in Japan. At GTC Japan 2017, plans were announced to integrate NVIDIA’s deep learning technology into Komatsu’s construction and mining equipment—specifically focusing on the use of the NVIDIA Jetson platform to create a new sophisticated system with Komatsu’s partners Skycatch Inc. and OPTiM Corp. NVIDIA GPUs will communicate with drones from Skycatch, Inc., which will collect 3D images to map terrain and visualize site conditions. OPTiM Corp. produces IoT-management software and its role is to build an application to aggregate, interpret and send terrain information to onsite workers and their industrial equipment.

Powered by NVIDIA’s Pascal architecture, the Jetson TX2 (USD 499) has an external SD card slot, integrated antennas for WiFi connectivity, a 5-megapixel MIPI CSI camera, a forced-air heat sink and a variety of connectors (including HDMI, Gigabit Ethernet, USB-A, micro USB, and SATA data and power). (Image courtesy of NVIDIA.)

But wait, what mining equipment does Cape Flattery Silica Sand Mine use?

According to Garry (Bart) Bartholdt, General Manager of the Cape Flattery Silica Sand mine, “We only use Loaders in our mining process and we currently have four Caterpillar 908G Wheel Loaders and two Komatsu WA500-7 Loaders.” When asked if the mine had any plans for incorporating driverless mining equipment, Bartholdt replied, “Not at this stage.”

How Silica Sand is Mined from Cape Flattery

Mining begins after Mitsubishi receives approval from the traditional owners (the local indigenous people) and obtains an environmental license from the Queensland government, which is granted only after flora and fauna surveys and drilling surveys of the dunes are performed. After the vegetation and a 300 mm layer of topsoil are removed and processed for seed recovery (for later regeneration of plant life), the silica sand is excavated by front-end loaders and transported by slurry line or conveyor belt to a washing facility.

The mine’s two Komatsu WA500-7 loaders, purchased in 2012 by general manager Garry Bartholdt (see Komatsu’s “Down To Earth” publication, page 56), could easily be retrofitted with autonomous driving technology. They may also be replaced soon, because silica sand is known to degrade industrial equipment at a rapid pace (especially tires). After a thorough washing and filtering process, the graded silica sand is transported to the wharf to be loaded onto barges for export.

Stage 2: From Sand to Silicon Substrate to Wafers

Since the world’s largest silica sand mine is owned by Mitsubishi, it’s safe to assume that silica sand from Cape Flattery in Australia makes its way to Japan, where SUMCO (owned in equal parts by Mitsubishi Materials Corporation and Sumitomo Metal Industries) uses it to manufacture silicon wafers, which are then bought by TSMC.

Taiwan, Japan, South Korea and the Philippines are the top importers of silica sand from Australia. The world’s largest semiconductor foundry, Taiwan Semiconductor Manufacturing Company (TSMC), has an annual capacity of roughly 12 million wafers in Taiwan. It buys silicon wafers from wafer manufacturers like F.S.T., GlobalWafers, S.E.H., Siltronic and SUMCO. SUMCO is a joint venture between Mitsubishi Materials Corporation and Sumitomo Metal Industries. (Image courtesy of SUMCO.)

Once the mined, washed and preprocessed silica sand reaches SUMCO in Japan, it is smelted and refined into polycrystalline silicon, raising its purity to nearly 100 percent (99.9999999999 percent, or “twelve nines”).

Before the silica sand makes its journey into the “brains” of deep learning GPUs like those produced by NVIDIA, it is transformed by wafer manufacturers like SUMCO into disk-shaped substrates called silicon wafers, using a method of crystal growth known as the Czochralski process. In this process, monocrystalline silicon is grown by first melting high-purity polycrystalline silicon (refined from silica sand like that mined and processed at the Cape Flattery Silica Mine) at 1,425 degrees Celsius in a quartz crucible. (Image courtesy of SUMCO.)

The walls of the crucible melt slightly into the mixture, adding a small concentration of oxygen. Though oxygen is considered an impurity, incorporating impurity atoms into molten silicon is an intentional process known to silicon manufacturers as doping. Besides the oxygen from the crucible walls, precise amounts of other dopant atoms, such as phosphorus and boron, are added to the molten silicon to produce n-type and p-type silicon, two types with different electrical properties.

A seed crystal is mounted on a precisely calibrated rod and dipped into the melted silicon. Under tightly controlled conditions, in an inert atmosphere of argon inside a quartz chamber, the seed crystal is rotated and pulled upward at the same time. A large cylindrical ingot with a near-perfect silicon lattice structure, into which transistors will later be built, is drawn from the molten silicon. (Image courtesy of Siltronic.)

Using either an annular saw or a wire saw, the single silicon crystal is then sawn into thin discs around 0.1 to 0.2 mm thick.

SUMCO did not respond to requests for information about whether it uses deep learning GPUs in its silicon wafer manufacturing process.

But what about the next stages of production? From SUMCO, the silicon wafers make their way to TSMC, where integrated circuits are fabricated on the newly manufactured wafers.
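Each finished wafer is eventually diced into many individual GPU dies, so die size drives the economics of this stage. As a rough back-of-the-envelope sketch in Python (using the standard gross-dies-per-wafer approximation, with hypothetical die sizes rather than NVIDIA’s actual figures), you can estimate how many die candidates a single wafer yields:

```python
import math

def gross_dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Standard approximation: wafer area divided by die area, minus a
    correction term for the partial dies lost around the circular edge."""
    d, s = wafer_diameter_mm, die_area_mm2
    return int(math.pi * (d / 2) ** 2 / s - math.pi * d / math.sqrt(2 * s))

# Hypothetical examples on a standard 300 mm wafer: a huge GPU die
# (~800 mm^2, roughly the class of NVIDIA's largest chips) versus a small one.
print(gross_dies_per_wafer(300, 800))  # ~64 gross die candidates
print(gross_dies_per_wafer(300, 100))  # ~640 for a 100 mm^2 die
```

The lesson is that a physically large deep learning GPU leaves room for only a few dozen candidates per wafer, which is one reason the fault detection discussed next matters so much.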

The silicon wafer manufacturing process requires a great deal of precision. But according to researcher Darko Stanisavljevic of the VIRTUAL VEHICLE Research Center in Graz, Austria, who studies the efficiency gains machine learning can bring to different stages of semiconductor production, the process is ripe with opportunities to use deep learning to streamline and automate various steps, partly because of the sheer amount of data produced.

Stanisavljevic said, “The [semiconductor manufacturing] process typically consists of more than 500 steps. All those steps in semiconductor fabrication in fab are monitored, thus generating immense amounts of data. In recent years, all the fabrication equipment is delivered with equipment/production sensors. Although the real-time monitoring of production is possible, the amount of generated data is so overwhelming that the timely detection of production faults is difficult. Machine learning techniques can be seen as very useful tools for pattern discovery in large datasets. It is important to state the fact that there is no single ML technique or algorithm optimal for all the problems in manufacturing.”
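To make that concrete, here is a minimal, hypothetical Python sketch of the kind of pattern discovery Stanisavljevic describes: an unsupervised Isolation Forest (from scikit-learn) flagging unusual equipment sensor readings. The data and parameters are invented for illustration; this is not his method or any fab’s real pipeline.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic stand-in for equipment sensor logs from one process step:
# rows are production runs, columns are sensor channels
# (temperature, pressure, gas flow, ...).
normal_runs = rng.normal(loc=0.0, scale=1.0, size=(5000, 8))
drifting_runs = rng.normal(loc=3.0, scale=2.0, size=(10, 8))  # faulty runs
readings = np.vstack([normal_runs, drifting_runs])

# Isolation Forest flags points that are easy to "isolate" from the rest,
# a common unsupervised choice when labeled fault data is scarce.
model = IsolationForest(contamination=0.005, random_state=0)
labels = model.fit_predict(readings)  # -1 = anomaly, 1 = normal

print(f"flagged {np.sum(labels == -1)} of {len(readings)} runs for review")
```

In a real fab, screening like this would run continuously as a first pass, narrowing the flood of monitored data down to the handful of runs worth an engineer’s attention.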

Stage 3: Fabricating NVIDIA’s Circuit Designs at TSMC

TSMC buys silicon wafers from SUMCO and is responsible for the intricate manufacturing of NVIDIA’s circuit designs. TSMC declined to respond to inquiries about the methodology by which it manufactures NVIDIA products. However, a huge amount of attention was paid to AI (specifically machine learning) during the 2017 TSMC Open Innovation Platform, where TSMC explored the use of machine learning to “apply path-grouping during P&R to improve timing and Synopsys ML adoption to predict potential DRC hotspots,” according to SemiWiki.com.

TSMC’s new 12nm process in action. Though it was supposed to have been rolled out as a 4th-generation optimization of a 16nm process, the company instead decided to make it an independent process technology. (Image courtesy of TSMC.)

In the manufacturing process shown above, circuit structures are projected through a mask onto the silicon wafer using UV photolithography. The exposed parts of the resist coating become soluble and are removed. The transferred circuit structures then act as a template that guides the etching of billions of current-switching structures. After this, the billions of nascent transistors move on to ion implantation, where the electrical properties of each transistor are set.

There are many more steps to this highly intricate manufacturing process, and proprietary information about the design and manufacturing of NVIDIA’s deep learning GPUs is obviously unavailable.

Stage 4: Final Assembly of Branded Devices

Foxconn is a Taiwan-based manufacturing giant that assembles finished electronics like NVIDIA GPUs and Apple products, largely in mainland China. Andrew Ng, former chief scientist of the Chinese tech giant Baidu, has started a new venture called Landing.AI, which is working with Foxconn and other companies to bring deep learning to complex manufacturing and assembly facilities.

As components for GPUs and other products get smaller and smaller, it gets harder for human workers to detect anomalies. Deep learning technology can be used for inspection purposes in factories like Foxconn’s, just as it already is for NVIDIA’s own boards. (Image courtesy of Cult of Mac.)

According to Ng, “AI is more accurate compared to humans. An AI system also automatically detects small particles and scratches of delicate camera lens units. The data is captured and summarized on a dashboard. AI technology is extremely scalable and can be quickly deployed across a company's production lines. Many computer vision systems are trained on massive data sets. AI will power adaptive manufacturing, automated quality control, predictive maintenance and much more. This is an exciting time for the manufacturing industry and we're working with some of the top manufacturers globally to deploy AI solutions and transform their businesses.”
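As a rough illustration of what sits under the hood of such a system, here is a minimal PyTorch sketch of a pass/fail image classifier for inspection crops. The architecture, image size and labels are hypothetical stand-ins, not Landing.AI’s or NVIDIA’s actual models, which are far larger and trained on curated defect libraries.

```python
import torch
import torch.nn as nn

class DefectClassifier(nn.Module):
    """Tiny CNN: a grayscale inspection crop in, defect/no-defect logits out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 128 -> 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 64 -> 32
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, 2),  # logits: [ok, defect]
        )

    def forward(self, x):
        return self.head(self.features(x))

model = DefectClassifier()

# One hypothetical batch of 128x128 grayscale PCB crops from a line camera.
batch = torch.randn(8, 1, 128, 128)
defect_prob = torch.softmax(model(batch), dim=1)[:, 1]
print(defect_prob)  # per-image probability that a defect is present
```

Trained with cross-entropy on labeled pass/fail images and run on a GPU at the end of the line, a classifier of this general shape is the kind of model that lets an inspection station keep pace with production.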

This new DGX-2, presented by CEO Jensen Huang at the recent NVIDIA GTC 2018 conference, was manufactured using TSMC’s 12nm FinFET process. Once it is assembled at Foxconn, it will be packaged and held in shipping warehouses before being sent to the customer. (Image courtesy of NVIDIA.)

When NVIDIA GPUs are finally finished, they are packaged and held in shipping warehouses all over the world, then shipped to customers and retailers. There they will power the next generation of AI technology, which may ultimately be used to design future GPUs, harvest silica or manufacture more GPUs.

Bottom Line

As the rest of this year unfolds, it is certain that GPU-driven artificial intelligence and engineering will begin to affect nearly every industry on Earth, and will eventually impact the work and careers of our readership. The wide-scale intertwining of AI with global industry is now an eventuality on an uncertain timeline; it is crystal clear that it is no longer hypothetical. And though the impact will be limited in some areas and profound in others, the time to improve your understanding is today, tomorrow and yesterday.

NVIDIA isn’t wasting any time helping people utilize its deep learning GPUs for other types of processes, which is why it created the NVIDIA Inception Program for startups. If you’re lucky enough to be accepted, your company gains access to a great deal of resources to help you innovate.

Nathan Schuett, founder and CEO of Prenav, was accepted to the program. According to Schuett, “Our focus is digitizing physical infrastructure, things like dams, bridges, power plants, tunnels and cell phone towers, with a combination of laser scanning, drones and machine learning. We use a laser scanner to help map the environment, as well as custom technology to guide drones to take photos of it, and then we analyze those photos and 3D model them with machine learning and other algorithms to build reports on the visual condition of a given piece of infrastructure. Most of our development efforts to date revolve around our goal of building an automated system that can accomplish that task. The vision is to press a button and, in an instant, capture a very high-resolution photographic model of the structure. Then, pressing another button, you can interpret that data and draw conclusions from it.”

If hustling, frugal startups are making huge waves using NVIDIA’s GPU stack, you can bet that nearly every entity along the GPU’s own supply chain stands to gain some measure of improvement from incorporating deep learning GPUs like those designed and sold by NVIDIA.

And though much of the information about manufacturing processes along the semiconductor supply chain is black-boxed, paywalled or otherwise inaccessible, it isn’t hard to see how deep learning GPUs will be applied to the means of their own production, making it more autonomous at an ever-increasing rate.