Effectiveness of Constant Stepsize in Markovian LSA and Statistical Inference

Authors

  • Dongyan (Lucy) Huo, Cornell University
  • Yudong Chen, University of Wisconsin-Madison
  • Qiaomin Xie, University of Wisconsin-Madison

DOI:

https://doi.org/10.1609/aaai.v38i18.30028

Keywords:

RU: Stochastic Optimization, ML: Reinforcement Learning, RU: Probabilistic Inference

Abstract

In this paper, we study the effectiveness of using a constant stepsize in statistical inference via linear stochastic approximation (LSA) algorithms with Markovian data. After establishing a Central Limit Theorem (CLT), we outline an inference procedure that uses averaged LSA iterates to construct confidence intervals (CIs). Our procedure leverages the fast mixing property of constant-stepsize LSA for better covariance estimation and employs Richardson-Romberg (RR) extrapolation to reduce the bias induced by constant stepsize and Markovian data. We develop theoretical results for guiding stepsize selection in RR extrapolation, and identify several important settings where the bias provably vanishes even without extrapolation. We conduct extensive numerical experiments and compare against classical inference approaches. Our results show that using a constant stepsize enjoys easy hyperparameter tuning, fast convergence, and consistently better CI coverage, especially when data is limited.
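As a rough illustration of the procedure the abstract describes (a sketch on a toy problem, not the authors' implementation), the snippet below runs constant-stepsize LSA driven by a two-state Markov chain, tail-averages the iterates, and applies Richardson-Romberg extrapolation across stepsizes α and 2α to cancel the O(α) bias. The chain, the maps A(·) and b(·), and all numerical choices are illustrative assumptions.

```python
import numpy as np

def run_lsa(alpha, n_steps, rng, burn_in=None):
    """Constant-stepsize LSA  theta_{k+1} = theta_k + alpha*(A(y_k)*theta_k + b(y_k)),
    driven by a two-state Markov chain (toy example, not the paper's setup).
    Returns the tail-averaged iterate."""
    P = np.array([[0.9, 0.1], [0.1, 0.9]])  # symmetric chain; stationary dist. is uniform
    A = lambda y: -(1.0 + 0.5 * y)          # E[A] = -1.25 under stationarity
    b = lambda y: 1.0 + y                   # E[b] = 1.5, so the target is theta* = 1.2
    if burn_in is None:
        burn_in = n_steps // 2              # discard the first half as burn-in
    y, theta = 0, 0.0
    tail = []
    for k in range(n_steps):
        theta += alpha * (A(y) * theta + b(y))
        if k >= burn_in:
            tail.append(theta)
        y = rng.choice(2, p=P[y])           # advance the Markovian noise
    return np.mean(tail)

rng = np.random.default_rng(0)
alpha = 0.1
theta_a = run_lsa(alpha, 200_000, rng)       # averaged iterate at stepsize alpha
theta_2a = run_lsa(2 * alpha, 200_000, rng)  # averaged iterate at stepsize 2*alpha
# Richardson-Romberg extrapolation: the O(alpha) bias terms cancel.
theta_rr = 2 * theta_a - theta_2a
```

The averaged iterates converge quickly but carry a stepsize-dependent bias; the extrapolated estimate `theta_rr` removes the first-order part of that bias, which is the role RR extrapolation plays in the paper's inference procedure.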

Published

2024-03-24

How to Cite

Huo, D. (Lucy), Chen, Y., & Xie, Q. (2024). Effectiveness of Constant Stepsize in Markovian LSA and Statistical Inference. Proceedings of the AAAI Conference on Artificial Intelligence, 38(18), 20447-20455. https://doi.org/10.1609/aaai.v38i18.30028

Section

AAAI Technical Track on Reasoning under Uncertainty