Aggregated Gradient Langevin Dynamics


  • Chao Zhang, Zhejiang University
  • Jiahao Xie, Zhejiang University
  • Zebang Shen, University of Pennsylvania
  • Peilin Zhao, Tencent AI Lab
  • Tengfei Zhou, Zhejiang University
  • Hui Qian, Zhejiang University



In this paper, we explore a general Aggregated Gradient Langevin Dynamics (AGLD) framework for Markov chain Monte Carlo (MCMC) sampling. We investigate the nonasymptotic convergence of AGLD with a unified analysis covering different data-access strategies (e.g., random access, cyclic access, and random reshuffling) and snapshot-updating strategies, under convex and nonconvex settings respectively. This is the first time that bounds for I/O-friendly strategies such as cyclic access and random reshuffling have been established in the MCMC literature. The theoretical results also indicate that methods within AGLD combine low per-iteration computational complexity with short mixing time. Empirical studies demonstrate that our framework allows us to derive novel schemes that generate high-quality samples for large-scale Bayesian posterior learning tasks.
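To make the flavor of such schemes concrete, below is a minimal illustrative sketch of one possible instance of an aggregated-gradient Langevin sampler: a SAGA-style variance-reduced gradient estimate combined with an Euler discretization of Langevin dynamics, supporting the three data-access strategies mentioned above. This is our own toy construction under stated assumptions, not the authors' exact algorithm; the function names and the toy posterior are hypothetical.

```python
import numpy as np

def aggregated_langevin(grads, n_data, theta0, eta, n_steps,
                        access="random", rng=None):
    """Sketch of an aggregated-gradient Langevin sampler (SAGA-style).

    grads(i, theta): gradient of the i-th component of the potential
                     U(theta) = sum_i f_i(theta).
    access: "random", "cyclic", or "reshuffle" data-access strategy.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    theta = np.asarray(theta0, dtype=float)
    # Snapshot table: one stored gradient per data point, plus their sum,
    # so the full-gradient estimate costs O(1) gradient evaluations per step.
    table = np.stack([grads(i, theta) for i in range(n_data)])
    table_sum = table.sum(axis=0)
    order = np.arange(n_data)
    samples = []
    for t in range(n_steps):
        if access == "random":
            i = rng.integers(n_data)
        else:
            if access == "reshuffle" and t % n_data == 0:
                rng.shuffle(order)  # new random pass order each epoch
            i = order[t % n_data]
        g_i = grads(i, theta)
        # Aggregated (variance-reduced) estimate of the full gradient.
        g_hat = n_data * (g_i - table[i]) + table_sum
        # Snapshot update for index i.
        table_sum += g_i - table[i]
        table[i] = g_i
        # Langevin step: gradient drift plus injected Gaussian noise.
        theta = theta - eta * g_hat \
                + np.sqrt(2.0 * eta) * rng.standard_normal(theta.shape)
        samples.append(theta.copy())
    return np.array(samples)
```

As a usage example, sampling the posterior mean of a Gaussian with known unit variance and a flat prior (so f_i(theta) = (theta - x_i)^2 / 2) should yield samples concentrated near the data mean:

```python
data = np.random.default_rng(1).normal(2.0, 1.0, size=100)
s = aggregated_langevin(lambda i, th: th - data[i:i + 1],
                        len(data), np.zeros(1), eta=1e-3,
                        n_steps=5000, access="reshuffle")
print(s[1000:].mean())  # close to data.mean()
```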




How to Cite

Zhang, C., Xie, J., Shen, Z., Zhao, P., Zhou, T., & Qian, H. (2020). Aggregated Gradient Langevin Dynamics. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 6746-6753.



AAAI Technical Track: Machine Learning