TY - JOUR
AU - Huo, Zhouyuan
AU - Gu, Bin
AU - Huang, Heng
PY - 2021/05/18
Y2 - 2024/03/28
TI - Large Batch Optimization for Deep Learning Using New Complete Layer-Wise Adaptive Rate Scaling
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 35
IS - 9
SE - AAAI Technical Track on Machine Learning II
DO - 10.1609/aaai.v35i9.16962
UR - https://ojs.aaai.org/index.php/AAAI/article/view/16962
SP - 7883-7890
AB - Training deep neural networks using a large batch size has shown promising results and benefits many real-world applications. Warmup is one of nontrivial techniques to stabilize the convergence of large batch training. However, warmup is an empirical method and it is still unknown whether there is a better algorithm with theoretical underpinnings. In this paper, we propose a novel Complete Layer-wise Adaptive Rate Scaling (CLARS) algorithm for large-batch training. We prove the convergence of our algorithm by introducing a new fine-grained analysis of gradient-based methods. Furthermore, the new analysis also helps to understand two other empirical tricks, layer-wise adaptive rate scaling and linear learning rate scaling. We conduct extensive experiments and demonstrate that the proposed algorithm outperforms gradual warmup technique by a large margin and defeats the convergence of the state-of-the-art large-batch optimizer in training advanced deep neural networks (ResNet, DenseNet, MobileNet) on ImageNet dataset.
ER -