TY - JOUR
AU - Qiu, Yixuan
AU - Wang, Xiao
PY - 2020/04/03
Y2 - 2022/05/17
TI - Stochastic Approximate Gradient Descent via the Langevin Algorithm
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 34
IS - 04
SE - AAAI Technical Track: Machine Learning
DO - 10.1609/aaai.v34i04.5992
UR - https://ojs.aaai.org/index.php/AAAI/article/view/5992
SP - 5428-5435
AB - <p>We introduce a novel and efficient algorithm called stochastic approximate gradient descent (SAGD) as an alternative to stochastic gradient descent for cases where unbiased stochastic gradients cannot be trivially obtained. Traditional methods for such problems rely on general-purpose sampling techniques such as Markov chain Monte Carlo, which typically require manual parameter tuning and do not work efficiently in practice. Instead, SAGD uses the Langevin algorithm to construct stochastic gradients that are biased in finite steps but accurate asymptotically, enabling us to theoretically establish the convergence guarantee for SAGD. Inspired by our theoretical analysis, we also provide useful guidelines for its practical implementation. Finally, we show that SAGD performs well experimentally in popular statistical and machine learning problems such as the expectation-maximization algorithm and variational autoencoders.</p>
ER -