Connect The Dots

Google uses artificial intelligence (AI) to accelerate the design of chips.

Google trained a machine-learning AI system to place elements in a microchip design – and it does so far better and faster than humans, saving space and power while improving performance.

Google is already using the technique to integrate memory blocks into production chips, including its tensor processing units (TPUs). This week, the journal Nature published a paper by a team led by Azalia Mirhoseini on the use of graph placement for so-called "floorplanning" of chips.

Nature stated in an opinion piece that automating chip design could accelerate future hardware generations, but cautioned: “While this is a significant achievement and will significantly aid in speeding up the supply chain, technical expertise must be widely shared to ensure that the ‘ecosystem’ of companies becomes truly global.” Additionally, the industry must ensure that time-saving techniques do not result in the loss of individuals with the necessary core skills.

Better chips by design

Although silicon chips are made up of numerous components, "floorplanning" – designing the physical layout – is a difficult task. Google trained the system on 10,000 chip designs that were evaluated for efficiency and performance. In under six hours, the AI produced floorplans as good as, or better than, those that take human engineers months of effort.

The AI was programmed to view the problem as a game in which it placed pieces on a board in order to achieve a win. The game comparison is instructive: while Go has approximately 10^360 configurations (its "state space"), a chip can have up to 10^2,500 – making it more than 10^2,000 times more complex.
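The comparison above is just exponent arithmetic: dividing powers of ten means subtracting exponents, as this short check illustrates.

```python
# Comparing state-space sizes by subtracting exponents:
# 10^2500 / 10^360 = 10^(2500 - 360) = 10^2140
go_exponent = 360      # Go board: roughly 10^360 configurations
chip_exponent = 2500   # chip floorplan: up to roughly 10^2,500 configurations

ratio_exponent = chip_exponent - go_exponent
print(f"Chip placement state space is ~10^{ratio_exponent} times larger than Go's")
```

Since 2,140 exceeds 2,000, the "more than 10^2,000 times" claim follows directly.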

The system began with large units, or "macro blocks," and then filled in the surrounding space with smaller cells – and the results are quite surprising. A diagram in the Nature paper illustrates the placement of macro blocks for memory. In panel a, showing the human-designed Ariane RISC-V processor, macro blocks are neatly lined up, leaving space between them for smaller components. In panel b, Google's AI rearranged and grouped the memory blocks differently, improving performance and allowing a more optimal placement of all the cells involved.
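The macros-first idea can be sketched with a toy greedy placer – this is an illustration of the general concept, not Google's reinforcement-learning method. The grid size, block names, and connectivity below are all invented for the example: each macro is placed in the free slot that minimizes total wire distance to the connected macros already placed, and the leftover slots would then hold clusters of smaller standard cells.

```python
# Toy macros-first placement on a coarse grid (illustrative only).
from itertools import product

GRID = 4  # 4x4 grid of candidate slots

# Each entry: (block name, names of already-placed blocks it is wired to).
# These memory-block names and nets are hypothetical.
macros = [("mem0", []), ("mem1", ["mem0"]), ("mem2", ["mem0", "mem1"])]

placed = {}  # block name -> (row, col)

def manhattan(a, b):
    """Grid distance used as a crude proxy for wire length."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

for name, neighbours in macros:
    free = [p for p in product(range(GRID), repeat=2) if p not in placed.values()]
    # Cost of a slot = total wire length to connected macros already placed.
    best = min(free, key=lambda p: sum(manhattan(p, placed[n]) for n in neighbours))
    placed[name] = best

# Slots not used by macros would be filled with standard-cell clusters.
print(placed)
```

A real floorplanner optimizes wirelength, congestion, and density jointly over thousands of blocks; the greedy loop here only conveys the "big blocks first, small cells after" ordering described above.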

The paper was initially published as a preprint on arXiv last year, and Google's AI head Jeff Dean discussed it during a keynote address at the International Solid-State Circuits Conference in February 2020. Following that, Google disclosed that the technique was used in chip development, specifically in the design of recent tensor processing units (TPUs).

“Our approach was used to design the next generation of Google’s artificial intelligence (AI) accelerators, and it has the potential to save thousands of hours of human effort with each new generation,” the authors write in their abstract. “Finally, we believe that more powerful AI-designed hardware will accelerate AI advancements, establishing a symbiotic relationship between the two fields.”
