Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel and Cho-Jui Hsieh

EECS Department, University of California, Berkeley

Technical Report No. UCB/EECS-2019-103

June 21, 2019

http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-103.pdf

Training large deep neural networks on massive datasets is very challenging. One promising approach to tackle this issue is the use of large-batch stochastic optimization. However, our understanding of this approach in the context of deep learning is still very limited. Furthermore, the current approaches in this direction are heavily hand-tuned. To this end, we first study a general adaptation strategy to accelerate training of deep neural networks using large minibatches. Using this strategy, we develop a new layer-wise adaptive large-batch optimization technique called LAMB. We also provide a formal convergence analysis of LAMB, as well as of the previously published layer-wise optimizer LARS, showing convergence to a stationary point in general nonconvex settings. Our empirical results demonstrate the superior performance of LAMB for BERT and ResNet-50 training. In particular, for BERT training, our optimization technique enables the use of very large batch sizes of 32868, requiring just 8599 iterations to train (as opposed to 1 million iterations in the original paper). By increasing the batch size to the memory limit of a TPUv3 pod, BERT training time can be reduced from 3 days to 76 minutes. Finally, we also demonstrate that LAMB outperforms previous large-batch training algorithms for ResNet-50 on ImageNet, obtaining state-of-the-art performance in just a few minutes.
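As a rough illustration of the layer-wise adaptation the abstract refers to, below is a minimal NumPy sketch of a LAMB-style update for a single layer. The function name, default hyperparameters, and the use of the raw weight norm (rather than a tunable scaling function) are illustrative assumptions, not the authors' exact implementation.

    import numpy as np

    def lamb_step(w, grad, m, v, step, lr=1e-3, beta1=0.9, beta2=0.999,
                  eps=1e-6, weight_decay=0.01):
        """One LAMB-style update for a single layer's weights (illustrative sketch)."""
        # Adam-style first and second moment estimates with bias correction.
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad ** 2
        m_hat = m / (1 - beta1 ** step)
        v_hat = v / (1 - beta2 ** step)
        # Adam direction plus decoupled weight decay.
        update = m_hat / (np.sqrt(v_hat) + eps) + weight_decay * w
        # Layer-wise trust ratio: scale the step so its size tracks the layer's weight norm.
        w_norm = np.linalg.norm(w)
        u_norm = np.linalg.norm(update)
        trust_ratio = w_norm / u_norm if w_norm > 0 and u_norm > 0 else 1.0
        w = w - lr * trust_ratio * update
        return w, m, v

In practice the same rule is applied independently to each layer's parameter tensor, which is what lets the effective step size adapt layer by layer at very large batch sizes.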


BibTeX citation:

@techreport{You:EECS-2019-103,
    Author= {You, Yang and Li, Jing and Reddi, Sashank and Hseu, Jonathan and Kumar, Sanjiv and Bhojanapalli, Srinadh and Song, Xiaodan and Demmel, James and Hsieh, Cho-Jui},
    Title= {Large Batch Optimization for Deep Learning: Training BERT in 76 minutes},
    Year= {2019},
    Month= {Jun},
    Url= {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-103.html},
    Number= {UCB/EECS-2019-103},
    Abstract= {Training large deep neural networks on massive datasets is very challenging. One promising approach to tackle this issue is the use of large-batch stochastic optimization. However, our understanding of this approach in the context of deep learning is still very limited. Furthermore, the current approaches in this direction are heavily hand-tuned. To this end, we first study a general adaptation strategy to accelerate training of deep neural networks using large minibatches. Using this strategy, we develop a new layer-wise adaptive large-batch optimization technique called LAMB. We also provide a formal convergence analysis of LAMB, as well as of the previously published layer-wise optimizer LARS, showing convergence to a stationary point in general nonconvex settings. Our empirical results demonstrate the superior performance of LAMB for BERT and ResNet-50 training. In particular, for BERT training, our optimization technique enables the use of very large batch sizes of 32868, requiring just 8599 iterations to train (as opposed to 1 million iterations in the original paper). By increasing the batch size to the memory limit of a TPUv3 pod, BERT training time can be reduced from 3 days to 76 minutes. Finally, we also demonstrate that LAMB outperforms previous large-batch training algorithms for ResNet-50 on ImageNet, obtaining state-of-the-art performance in just a few minutes.},
}

EndNote citation:

%0 Report
%A You, Yang
%A Li, Jing
%A Reddi, Sashank
%A Hseu, Jonathan
%A Kumar, Sanjiv
%A Bhojanapalli, Srinadh
%A Song, Xiaodan
%A Demmel, James
%A Hsieh, Cho-Jui
%T Large Batch Optimization for Deep Learning: Training BERT in 76 minutes
%I EECS Department, University of California, Berkeley
%D 2019
%8 June 21
%@ UCB/EECS-2019-103
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-103.html
%F You:EECS-2019-103