Dropout Reduces Underfitting
Oscar Xu
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2023-70
May 8, 2023
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-70.pdf
Introduced by Hinton et al. in 2012, dropout has stood the test of time as a regularizer for preventing overfitting in neural networks. In this study, we demonstrate that dropout can also mitigate underfitting when used at the start of training. During the early phase, we find dropout reduces the directional variance of gradients across mini-batches and helps align the mini-batch gradients with the entire dataset's gradient. This helps counteract the stochasticity of SGD and limits the influence of individual batches on model training. Our findings lead us to a solution for improving performance in underfitting models - early dropout: dropout is applied only during the initial phase of training, and turned off afterward. Models equipped with early dropout achieve lower final training loss compared to their counterparts without dropout. Additionally, we explore a symmetric technique for regularizing overfitting models - late dropout, where dropout is not used in the early iterations and is only activated later in training. Experiments on ImageNet and various vision tasks demonstrate that our methods consistently improve generalization accuracy. Our results encourage more research on understanding regularization in deep learning, and our methods can be useful tools for future neural network training, especially in the era of large data.
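As a rough illustration of the early/late dropout schedules described in the abstract, the sketch below assumes a PyTorch model containing standard nn.Dropout layers whose drop probability can be toggled during training; the drop rate, epoch cutoff, and training-loop details are illustrative placeholders, not the report's exact settings.

```python
# Minimal sketch of an early-dropout schedule (assumption: a PyTorch model
# with nn.Dropout layers; hyperparameters here are placeholders).
import torch.nn as nn


def set_dropout_rate(model: nn.Module, p: float) -> None:
    """Set the drop probability of every nn.Dropout module in the model."""
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.p = p


def train(model, loader, optimizer, loss_fn, num_epochs,
          drop_rate=0.1, early_epochs=50):
    for epoch in range(num_epochs):
        # Early dropout: dropout is active only for the first `early_epochs`
        # epochs, then disabled for the rest of training.
        # (Late dropout would flip the condition: 0.0 early, drop_rate later.)
        set_dropout_rate(model, drop_rate if epoch < early_epochs else 0.0)
        model.train()
        for inputs, targets in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()
            optimizer.step()
```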
Advisor: Trevor Darrell
BibTeX citation:
@mastersthesis{Xu:EECS-2023-70,
    Author = {Xu, Oscar},
    Title = {Dropout Reduces Underfitting},
    School = {EECS Department, University of California, Berkeley},
    Year = {2023},
    Month = {May},
    Url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-70.html},
    Number = {UCB/EECS-2023-70},
    Abstract = {Introduced by Hinton et al. in 2012, dropout has stood the test of time as a regularizer for preventing overfitting in neural networks. In this study, we demonstrate that dropout can also mitigate underfitting when used at the start of training. During the early phase, we find dropout reduces the directional variance of gradients across mini-batches and helps align the mini-batch gradients with the entire dataset's gradient. This helps counteract the stochasticity of SGD and limits the influence of individual batches on model training. Our findings lead us to a solution for improving performance in underfitting models - early dropout: dropout is applied only during the initial phase of training, and turned off afterward. Models equipped with early dropout achieve lower final training loss compared to their counterparts without dropout. Additionally, we explore a symmetric technique for regularizing overfitting models - late dropout, where dropout is not used in the early iterations and is only activated later in training. Experiments on ImageNet and various vision tasks demonstrate that our methods consistently improve generalization accuracy. Our results encourage more research on understanding regularization in deep learning, and our methods can be useful tools for future neural network training, especially in the era of large data.}
}
EndNote citation:
%0 Thesis
%A Xu, Oscar
%T Dropout Reduces Underfitting
%I EECS Department, University of California, Berkeley
%D 2023
%8 May 8
%@ UCB/EECS-2023-70
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2023/EECS-2023-70.html
%F Xu:EECS-2023-70