Google Develops AI Algorithm for AI Chip Design

Google Tensor Processing Unit 3.0. (Image courtesy of Zinskauf.)

In electronic design automation (EDA), chip floor planning means solving constrained optimization problems within an extraordinarily complex, minuscule assembly. These are significant challenges that require serious engineering: placing circuit components within a chip’s core according to a fully worked-out physical design flow demands the highest degree of planning and precision.

Over the past half-decade or so, hardware engineers at Google designed the Tensor Processing Unit (TPU), a chip built specifically to run AI algorithms and applications in the company’s internal servers. That work has now come full circle: Google engineers are using AI to improve the efficiency of chip placement for the TPUs themselves.

If they succeed in using AI hardware and software to improve chip placement, a self-reinforcing cycle can begin: AI software running on AI hardware helps engineers fix deficiencies in AI chips like TPUs, the improved chips enable more powerful AI software, and that software in turn helps engineers improve the next generation of AI hardware, and so on.

AI hardware powering AI software is a natural fit for chip floor planning, where hundreds to thousands of tiny components must be optimized across several layers in a tightly constrained assembly. Historically, engineers have optimized chip placement by reducing the wire length between components to improve efficiency, then validated each design through a rigorous, and very time-consuming, round of simulation with powerful EDA software.
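To make the wire-length objective concrete, a common proxy used in placement tools is the half-perimeter wirelength (HPWL): the wiring for each net is estimated by the half-perimeter of the bounding box around its pins. The article does not say which metric Google's flow uses; the sketch below is simply the standard textbook version, with illustrative names.

```python
# Sketch: half-perimeter wirelength (HPWL), a common proxy for the
# wire length that placement tools try to minimize. Function names
# and data layout here are illustrative, not any specific tool's API.

def hpwl(net):
    """Estimate one net's wire length as the half-perimeter of the
    bounding box around its pins, given as (x, y) coordinates."""
    xs = [x for x, _ in net]
    ys = [y for _, y in net]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def total_wirelength(nets):
    """Sum HPWL over all nets in a placement."""
    return sum(hpwl(net) for net in nets)

# Example: two nets connecting components at grid coordinates.
nets = [
    [(0, 0), (3, 4)],          # bounding box 3 x 4 -> HPWL 7
    [(1, 1), (2, 1), (1, 3)],  # bounding box 1 x 2 -> HPWL 3
]
print(total_wirelength(nets))  # -> 10
```

A placement with a lower total HPWL generally means shorter interconnects, and therefore less delay and power, which is why it serves as the quantity engineers (and, below, the learning algorithm) try to drive down.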

The length of time that chip architectures remain relevant is diminishing as machine learning algorithms grow more sophisticated and demanding. They are burning through chip architectures at an increasing rate, which has led to the application of machine learning to chip design itself. However, applying machine learning to chip placement has generally been stymied by the algorithms’ inability to address multiple design objectives simultaneously.

At Google, this bottleneck in applying machine learning to multivariate chip placement was widened with reinforcement learning. The reinforcement learning algorithm was given a built-in reward function and set to work generating hundreds of candidate chip placement configurations, evaluating the performance of each against that reward. Guided by this self-evaluation, the algorithm kept “going back to the drawing board,” continuously improving the design until it produced a finished, fully optimized plan for chip placement.
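The loop described above can be caricatured in a few lines. To be clear, this is a grossly simplified stand-in: Google's system trains a neural policy with reinforcement learning, whereas the toy sketch below just uses a reward function (negative wirelength) to guide repeated refinement of a candidate placement. Every name and parameter here is illustrative.

```python
import random

# Toy sketch of a reward-driven "back to the drawing board" loop.
# The real system trains a neural policy; here a reward function
# simply steers random refinement of a placement. All illustrative.

def wirelength(placement, nets):
    """Total half-perimeter wirelength for a component -> (x, y) map."""
    total = 0
    for net in nets:
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def reward(placement, nets):
    """Built-in reward: shorter total wiring scores higher."""
    return -wirelength(placement, nets)

def optimize(components, nets, grid=8, steps=2000, seed=0):
    rng = random.Random(seed)
    # Start from a random placement on a grid.
    best = {c: (rng.randrange(grid), rng.randrange(grid)) for c in components}
    best_r = reward(best, nets)
    for _ in range(steps):
        # Perturb one component's location and re-score the design.
        cand = dict(best)
        cand[rng.choice(components)] = (rng.randrange(grid), rng.randrange(grid))
        r = reward(cand, nets)
        if r > best_r:  # keep the candidate only if the reward improves
            best, best_r = cand, r
    return best, best_r

components = ["a", "b", "c", "d"]
nets = [["a", "b"], ["b", "c"], ["c", "d"], ["d", "a"]]
placement, score = optimize(components, nets)
print(score)  # reward is negative wirelength, so closer to 0 is better
```

The key idea the sketch shares with the real system is the feedback signal: each candidate design is scored by a reward function, and the score decides whether the change survives into the next iteration.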

Bottom Line

Researchers took the fruits of the reinforcement learning algorithm’s labor and ran them through the same EDA simulation software, finding that many (but not all) of the algorithm-designed chip floor plans were better than those designed by engineers. The engineers may even have learned a few new things from the reinforcement learning algorithm’s floor plans that outperformed their own.