Google says it has developed artificial intelligence software that can design a chip layout in under six hours, a task that typically takes human engineers months.
In a paper published in the journal Nature and reported by CNBC, the tech giant’s engineers said the breakthrough could have “major implications” for the semiconductor sector.
The AI has already been used to design the latest generation of Google’s tensor processing unit (TPU) chips, which the company uses to run AI-related tasks.
“We showed that our method can generate chip floorplans that are comparable or superior to human experts in under six hours, whereas humans take months to produce acceptable floorplans for modern accelerators,” the authors wrote in their conclusion.
“Our method has been used in production to design the next generation of Google TPU.”
A chip’s “floorplan” determines where components such as the CPU, which executes the user’s commands and directs the rest of the system, graphics processing units (GPUs) and memory are placed on the silicon die in relation to one another, CNBC reported. Their positioning on the die matters because it affects the chip’s power consumption and processing speed, the news outlet noted.
Google’s deep reinforcement learning system, an algorithm trained to take actions that maximize its chance of earning a reward, can produce these floorplans with relatively little human effort, CNBC noted.
Similar systems can also defeat humans at complex games like Go and chess. In those situations, the algorithms are trained to move pieces that increase their chances of winning the game.
But in the chip scenario, the AI is trained to find the best combination of components in order to make it as computationally efficient as possible, CNBC reported.
The AI system was fed 10,000 chip floorplans in order to “learn” what works and what doesn’t, according to the paper.
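Google’s production system applies deep reinforcement learning to real chip designs, as described in the Nature paper. Purely as a toy illustration of the reward-maximization loop sketched above, and not a representation of Google’s method, the following invented example uses simple tabular Q-learning: an agent places three connected components on a row of four slots, and its reward is the negative total wire length, so maximizing reward means minimizing wiring. The task, component count and net list are all made up for this sketch.

```python
import random

# Toy setup (invented for illustration): 4 slots in a row, 3 components,
# and a list of component pairs ("nets") that must be wired together.
SLOTS = 4
NETS = [(0, 1), (1, 2)]

def reward(placement):
    """Negative total wire length for a full placement (higher is better)."""
    return -sum(abs(placement[a] - placement[b]) for a, b in NETS)

def train(episodes=3000, eps=0.3, alpha=0.5, seed=0):
    """Tabular Q-learning: learn the value of placing each component in each slot."""
    rng = random.Random(seed)
    q = {}  # (partial placement, chosen slot) -> estimated final reward
    for _ in range(episodes):
        placed = ()
        while len(placed) < 3:
            free = [s for s in range(SLOTS) if s not in placed]
            # Epsilon-greedy: usually exploit the best-known slot, sometimes explore.
            slot = rng.choice(free) if rng.random() < eps else max(
                free, key=lambda s: q.get((placed, s), 0.0))
            nxt = placed + (slot,)
            # The reward arrives only once the placement is complete, mirroring
            # how a floorplan is judged after all components are down.
            if len(nxt) == 3:
                target = reward(nxt)
            else:
                target = max(q.get((nxt, s), 0.0)
                             for s in range(SLOTS) if s not in nxt)
            key = (placed, slot)
            q[key] = q.get(key, 0.0) + alpha * (target - q.get(key, 0.0))
            placed = nxt
    return q

def best_placement(q):
    """Greedy rollout using the learned Q-table."""
    placed = ()
    while len(placed) < 3:
        free = [s for s in range(SLOTS) if s not in placed]
        placed += (max(free, key=lambda s: q.get((placed, s), 0.0)),)
    return placed
```

Giving the reward only at the end of an episode reflects the structure of the placement problem: the quality of a layout can only be scored once every component has a position, so intermediate steps must learn their value from the outcomes they lead to.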
While human chip designers typically lay out components in neat lines, Google’s AI uses a more scattered approach to design its chips, CNBC reported.
It isn’t the first time an AI system has taken an unconventional approach after learning a task from human data.
DeepMind’s famous “AlphaGo” AI made an unconventional move against Go world champion Lee Sedol in 2016 that amazed players around the world, CNBC reported.
Facebook’s chief AI scientist, Yann LeCun, hailed the research in a Twitter thread Thursday, calling it “very nice work,” and adding “this is exactly the type of setting in which [reinforcement learning] shines.”
The work was also lauded as an “important achievement” that will “be a huge help in speeding up the supply chain” in a Nature editorial.
But the journal urged that “the technical expertise must be shared widely to make sure the ‘ecosystem’ of companies becomes genuinely global.”
“The industry must make sure that the time-saving techniques do not drive away people with the necessary core skills,” it added.
© 2021 Newsmax. All rights reserved.