Move Over Moore’s Law, Make Way for Huang’s Law
Graphics processors are on a supercharged development path that eclipses Moore’s Law, says Nvidia’s Jensen Huang
An exuberant Jensen Huang, who gave a keynote and popped up on stage during various events at Nvidia’s 2018 GPU Technology Conference (GTC) held in San Jose, Calif., last week, repeatedly made the point that due to extreme advances in technology, graphics processing units (GPUs) are governed by a law of their own.
“There’s a new law going on,” he said, “a supercharged law.”
Huang, who is CEO of Nvidia, didn’t call it Huang’s Law; I’m guessing he’ll leave that to others. After all, Gordon Moore wasn’t the one who gave Moore’s Law its now-famous moniker. (Moore’s Law—Moore himself called it an observation—refers to the regular doubling of the number of components per integrated circuit that drove a dramatic reduction in the cost of computing power.)
But Huang did make sure nobody attending GTC missed the memo.
Just how fast does GPU technology advance? In his keynote address, Huang pointed out that Nvidia’s GPUs today are 25 times faster than they were five years ago. Had they been advancing according to Moore’s Law, he said, their speed would have increased only by a factor of 10.
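That factor of 10 is easy to reproduce with back-of-the-envelope arithmetic (my calculation, not Huang’s), assuming the common statement of Moore’s Law as a doubling roughly every 18 months:

```python
# Moore's Law projection over five years, assuming a doubling every 18 months
# (a common phrasing of the law; not a figure from Huang's keynote).
moore_factor = 2 ** (5 * 12 / 18)   # number of doublings in 5 years = 60/18
print(round(moore_factor, 1))       # ≈ 10.1, in line with Huang's "factor of 10"

gpu_factor = 25                     # Huang's claimed GPU speedup over the same 5 years
print(round(gpu_factor / moore_factor, 1))  # ≈ 2.5x ahead of the Moore's Law pace
```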
Huang later considered the increasing power of GPUs in terms of another benchmark: the time to train AlexNet, a neural network trained on 15 million images. He said that five years ago, it took AlexNet six days on two of Nvidia’s GTX 580s to go through the training process; with the company’s latest hardware, the DGX-2, it takes 18 minutes—a factor of 500.
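As a quick sanity check of that figure (my arithmetic, not Nvidia’s), six days works out to 480 times 18 minutes, which Huang evidently rounded up to 500:

```python
# Comparing the two quoted AlexNet training times from Huang's remarks.
six_days_min = 6 * 24 * 60        # training time on two GTX 580s, in minutes
dgx2_min = 18                     # quoted training time on the DGX-2
speedup = six_days_min / dgx2_min
print(speedup)                    # 480.0 -- close to the "factor of 500" Huang cited
```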
So Huang was throwing a variety of numbers out there; it seems he’s still working out the exact multiple he’s talking about. But he was clear about the reason that GPUs need a law of their own—they benefit from simultaneous advances on multiple fronts: architecture, interconnects, memory technology, algorithms, and more.
“The innovation isn’t just about chips,” he said. “It’s about the entire stack.”
GPUs are also advancing more quickly than CPUs because they rely on a parallel architecture, Jesse Clayton, an Nvidia senior manager, pointed out in another session.