Preconditioned Stochastic Gradient Langevin Dynamics for Deep Neural Networks

Authors

  • Chunyuan Li, Duke University
  • Changyou Chen, Duke University
  • David Carlson, Columbia University
  • Lawrence Carin, Duke University

DOI:

https://doi.org/10.1609/aaai.v30i1.10200

Keywords:

precondition, stochastic gradient MCMC, stochastic gradient Langevin dynamics, deep neural networks

Abstract

Effective training of deep neural networks suffers from two main issues. The first is that the parameter space of these models exhibits pathological curvature. Recent methods address this problem by using adaptive preconditioning for Stochastic Gradient Descent (SGD); these methods improve convergence by adapting to the local geometry of the parameter space. A second issue is overfitting, which is typically addressed by early stopping. However, recent work has demonstrated that Bayesian model averaging mitigates this problem. The posterior can be sampled using Stochastic Gradient Langevin Dynamics (SGLD). However, the rapidly changing curvature renders default SGLD methods inefficient. Here, we propose combining adaptive preconditioners with SGLD. In support of this idea, we give theoretical properties on asymptotic convergence and predictive risk. We also provide empirical results for Logistic Regression, Feedforward Neural Nets, and Convolutional Neural Nets, demonstrating that our preconditioned SGLD method gives state-of-the-art performance on these models.
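
To make the idea in the abstract concrete, below is a minimal sketch of a single update that combines an adaptive diagonal preconditioner (RMSProp-style, one common choice for "adaptive preconditioning") with an SGLD step. The function name psgld_step, its hyperparameters (eps, alpha, lam), and the omission of any correction term involving derivatives of the preconditioner are illustrative assumptions for this sketch, not the authors' exact algorithm.

```python
import numpy as np

def psgld_step(theta, grad_logpost, V, eps=1e-3, alpha=0.99, lam=1e-5):
    """One preconditioned SGLD update (illustrative sketch only).

    theta        : current parameter vector (numpy array)
    grad_logpost : stochastic gradient of the log-posterior, estimated on a
                   minibatch and rescaled to the full dataset
    V            : running average of squared gradients (preconditioner state)
    """
    # RMSProp-style diagonal preconditioner adapts to local curvature.
    V = alpha * V + (1.0 - alpha) * grad_logpost ** 2
    G = 1.0 / (lam + np.sqrt(V))

    # Langevin update: preconditioned gradient step plus Gaussian noise with
    # covariance eps * G, so the noise is scaled by the same preconditioner.
    # Any correction term for the changing preconditioner is omitted here.
    noise = np.sqrt(eps * G) * np.random.randn(*theta.shape)
    theta = theta + 0.5 * eps * G * grad_logpost + noise
    return theta, V

# Toy usage on a 1-D standard normal posterior, where grad log p(theta) = -theta:
theta, V = np.zeros(1), np.zeros(1)
for _ in range(1000):
    theta, V = psgld_step(theta, -theta, V, eps=1e-2)
```

Because the injected noise is scaled by the same preconditioner as the gradient step, the sampler takes larger steps in flat directions and smaller steps in sharp ones, which is the behavior the abstract attributes to adapting to local geometry.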

Published

2016-02-21

How to Cite

Li, C., Chen, C., Carlson, D., & Carin, L. (2016). Preconditioned Stochastic Gradient Langevin Dynamics for Deep Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.10200

Section

Technical Papers: Machine Learning Methods