Inefficiency of K-FAC for Large Batch Size Training

Authors

  • Linjian Ma University of California at Berkeley
  • Gabe Montague University of California at Berkeley
  • Jiayu Ye University of California at Berkeley
  • Zhewei Yao University of California at Berkeley
  • Amir Gholami University of California at Berkeley
  • Kurt Keutzer University of California at Berkeley
  • Michael Mahoney University of California at Berkeley

DOI:

https://doi.org/10.1609/aaai.v34i04.5946

Abstract

Several recent works have claimed record times for ImageNet training. These records are achieved by using large batch sizes during training to leverage parallel resources and reduce wall-clock time per training epoch. However, these solutions often require massive hyper-parameter tuning, an important cost that is frequently ignored. In this work, we perform an extensive analysis of large batch size training for two popular methods: Stochastic Gradient Descent (SGD) and the Kronecker-Factored Approximate Curvature (K-FAC) method. We evaluate the performance of these methods in terms of both wall-clock time and aggregate computational cost, and we study their hyper-parameter sensitivity by performing more than 512 experiments per batch size for each method. We perform experiments on multiple models on two datasets, CIFAR-10 and SVHN. The results show that beyond a critical batch size both K-FAC and SGD significantly deviate from ideal strong-scaling behavior, and that, contrary to common belief, K-FAC does not exhibit improved large-batch scalability compared to SGD.
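For context (this sketch is not taken from the paper itself), the two optimizers compared in the abstract differ only in how each layer's gradient is preconditioned. A rough per-layer form, assuming a weight matrix W with gradient \nabla_W L, damping \lambda, and learning rate \eta, is:

SGD:    W_{t+1} = W_t - \eta \, \nabla_W L
K-FAC:  W_{t+1} = W_t - \eta \, (G + \lambda I)^{-1} \, \nabla_W L \, (A + \lambda I)^{-1}

where A \approx \mathbb{E}[a a^\top] is the second-moment matrix of the layer inputs and G \approx \mathbb{E}[g g^\top] is the second-moment matrix of the pre-activation gradients, so that A \otimes G serves as a Kronecker-factored approximation to that layer's Fisher block. The symbols A, G, \lambda, and \eta here are illustrative notation, not the paper's.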

Published

2020-04-03

How to Cite

Ma, L., Montague, G., Ye, J., Yao, Z., Gholami, A., Keutzer, K., & Mahoney, M. (2020). Inefficiency of K-FAC for Large Batch Size Training. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 5053-5060. https://doi.org/10.1609/aaai.v34i04.5946

Section

AAAI Technical Track: Machine Learning