ImageNet Training in 24 Minutes
THIS REPORT HAS BEEN WITHDRAWN
Yang You and Zhao Zhang and James Demmel and Kurt Keutzer and Cho-jui Hsieh
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2017-155
September 14, 2017
http://www2.eecs.berkeley.edu/Pubs/TechRpts/Withdrawn/EECS-2017-155.pdf
Finishing 90-epoch ImageNet-1k training with ResNet-50 on an NVIDIA M40 GPU takes 14 days. This training requires 10^18 single-precision operations in total. On the other hand, the world's current fastest supercomputer can finish 2 * 10^17 single-precision operations per second (Top500 list). If we could make full use of the supercomputer for DNN training, we should be able to finish the 90-epoch ResNet-50 training in five seconds. However, the current bottleneck for fast DNN training is at the algorithm level. Specifically, the current batch size (e.g., 512) is too small to make efficient use of many processors.
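The five-second figure follows from dividing the total work by the peak rate, under the optimistic assumption that the machine's full rate could be sustained for DNN training:

    10^18 ops / (2 * 10^17 ops/s) = 5 s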
For large-scale DNN training, we focus on using large-batch, data-parallel synchronous SGD without losing accuracy within a fixed number of epochs. The LARS algorithm (You, Gitman, Ginsburg, 2017) enables us to scale the batch size to extremely large values (e.g., 32K). We finish the 100-epoch ImageNet training with AlexNet in 24 minutes, which is a world record. Matching Facebook's result (Goyal et al., 2017), we finish the 90-epoch ImageNet training with ResNet-50 in one hour. However, our hardware budget is only 1.2 million USD, 3.4 times lower than Facebook's 4.1 million USD.
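For reference, below is a minimal sketch of the layer-wise learning-rate rule used by LARS (following You, Gitman, and Ginsburg, 2017). The trust coefficient eta, weight decay, momentum values, and the NumPy-based update loop are illustrative assumptions, not the exact training configuration used in this report.

    import numpy as np

    def lars_step(weights, grads, velocities, base_lr,
                  eta=0.001, weight_decay=0.0005, momentum=0.9):
        """One LARS update: each layer's step is rescaled by a local
        learning rate proportional to ||w|| / (||g|| + wd * ||w||)."""
        for layer, w in weights.items():
            g = grads[layer]
            w_norm = np.linalg.norm(w)
            g_norm = np.linalg.norm(g)
            # Layer-wise trust ratio; fall back to 1.0 when a norm is zero.
            if w_norm > 0 and g_norm > 0:
                local_lr = eta * w_norm / (g_norm + weight_decay * w_norm)
            else:
                local_lr = 1.0
            # Momentum update with the locally scaled, regularized gradient.
            step = base_lr * local_lr * (g + weight_decay * w)
            velocities[layer] = momentum * velocities[layer] + step
            weights[layer] = w - velocities[layer]
        return weights, velocities

Because the local learning rate adapts to each layer's weight and gradient norms, the global learning rate can be raised aggressively as the batch size grows, which is what makes batch sizes like 32K feasible without the accuracy loss usually seen with plain large-batch SGD.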
Author Comments: While we believe the technical results of this paper to be accurate, overall this paper was published prematurely. We are working on a publication.