Effects of Momentum in Implicit Bias of Gradient Flow for Diagonal Linear Networks

Authors

  • Bochen Lyu, DataCanvas & University of Southampton
  • He Wang, UCL Centre for Artificial Intelligence, Department of Computer Science
  • Zheng Wang, University of Leeds
  • Zhanxing Zhu, University of Southampton

DOI:

https://doi.org/10.1609/aaai.v39i18.34118

Abstract

This paper studies the regularization effect of momentum-based methods in regression settings and analyzes the popular diagonal linear networks to precisely characterize the implicit bias of continuous versions of the heavy-ball (HB) method and Nesterov's accelerated gradient (NAG) method. We show that HB and NAG exhibit an implicit bias different from that of GD for diagonal linear networks, in contrast to the classic linear regression problem, where momentum-based methods share the same implicit bias as GD. Specifically, the role of momentum in the implicit bias of GD is twofold: (a) HB and NAG induce extra initialization mitigation effects, similar to those of SGD, that are beneficial for generalization in sparse regression; (b) the implicit regularization effects of HB and NAG also depend explicitly on the initialization of the gradients, which may not be benign for generalization. As a result, whether HB and NAG generalize better than GD depends jointly on these twofold effects, which are determined by parameters such as the learning rate, the momentum factor, and the integral of gradients. Our findings highlight the potentially beneficial role of momentum and can help explain its advantages in practice, such as when it leads to better generalization performance.
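The setting the abstract describes can be made concrete with a small numerical sketch. The following is an illustrative assumption, not the paper's exact construction: a depth-2 diagonal linear network with predictor w = u*u - v*v, trained on an overparametrized sparse regression problem by plain gradient descent and by a discrete heavy-ball step (all hyperparameters here are arbitrary choices for illustration).

```python
import numpy as np

# Illustrative sketch only (assumed setup, not the paper's): sparse regression
# with a diagonal linear network w = u*u - v*v, trained by GD and heavy-ball.
rng = np.random.default_rng(0)
n, d = 20, 40
X = rng.standard_normal((n, d))
w_star = np.zeros(d)
w_star[:3] = 1.0                          # sparse ground-truth regressor
y = X @ w_star

def grad_w(w):
    """Gradient of the least-squares loss 0.5 * ||X w - y||^2 / n w.r.t. w."""
    return X.T @ (X @ w - y) / n

def train(beta, lr=0.01, alpha=0.01, steps=20000):
    u = np.full(d, alpha)                 # small initialization scale alpha
    v = np.full(d, alpha)                 # drives the (approximately l1) bias
    du = np.zeros(d)
    dv = np.zeros(d)
    for _ in range(steps):
        g = grad_w(u * u - v * v)
        du = beta * du - lr * 2.0 * g * u  # heavy-ball step; beta = 0 is GD
        dv = beta * dv + lr * 2.0 * g * v  # sign flips via chain rule on -v*v
        u += du
        v += dv
    return u * u - v * v

w_gd = train(beta=0.0)                    # vanilla gradient descent
w_hb = train(beta=0.9)                    # heavy-ball momentum
```

Both runs fit the training data; in the spirit of the paper's analysis, comparing the two recovered regressors (e.g., their distances to the sparse w_star) is one way to probe how the momentum factor beta shifts the implicit bias relative to GD.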

Published

2025-04-11

How to Cite

Lyu, B., Wang, H., Wang, Z., & Zhu, Z. (2025). Effects of Momentum in Implicit Bias of Gradient Flow for Diagonal Linear Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 39(18), 19242-19250. https://doi.org/10.1609/aaai.v39i18.34118

Section

AAAI Technical Track on Machine Learning IV