Deep Bayesian Quadrature Policy Optimization

Authors

  • Ravi Tej Akella, Purdue University
  • Kamyar Azizzadenesheli, Purdue University
  • Mohammad Ghavamzadeh, Google Research
  • Animashree Anandkumar, Caltech
  • Yisong Yue, Caltech

Keywords

Reinforcement Learning

Abstract

We study the problem of obtaining accurate policy gradient estimates from a finite number of samples. Monte-Carlo methods have been the default choice for policy gradient estimation, despite suffering from high variance in the gradient estimates. On the other hand, more sample-efficient alternatives like Bayesian quadrature methods have received little attention due to their high computational complexity. In this work, we propose deep Bayesian quadrature policy gradient (DBQPG), a computationally efficient, high-dimensional generalization of Bayesian quadrature for policy gradient estimation. We show that DBQPG can substitute for Monte-Carlo estimation in policy gradient methods, and demonstrate its effectiveness on a set of continuous control benchmarks. In comparison to Monte-Carlo estimation, DBQPG provides (i) more accurate gradient estimates with significantly lower variance, (ii) a consistent improvement in sample complexity and average return for several deep policy gradient algorithms, and (iii) an uncertainty estimate for the gradient that can be incorporated to further improve performance.
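To give a concrete sense of the Monte-Carlo vs. Bayesian quadrature contrast the abstract draws, the sketch below estimates a simple one-dimensional expectation both ways. This is an illustrative toy, not the paper's DBQPG method: the RBF kernel, standard-normal integration measure, function names, and the 1-D setup are all our own assumptions for exposition. For an RBF kernel and a Gaussian measure, the kernel mean embedding has a closed form, so the Bayesian quadrature posterior mean of the integral is `z^T K^{-1} f`.

```python
import numpy as np

def bayesian_quadrature(x, fx, lengthscale=1.0, jitter=1e-6):
    """Toy BQ estimate of E_{x ~ N(0,1)}[f(x)] from samples (x, f(x)).

    For k(x, x') = exp(-(x - x')^2 / (2 l^2)), the kernel mean against
    N(0, 1) is available in closed form:
        z_i = l / sqrt(l^2 + 1) * exp(-x_i^2 / (2 (l^2 + 1))).
    The BQ posterior mean of the integral is then z^T K^{-1} f.
    """
    l2 = lengthscale ** 2
    diff = x[:, None] - x[None, :]
    # Gram matrix of the RBF kernel, with jitter for numerical stability
    K = np.exp(-diff ** 2 / (2 * l2)) + jitter * np.eye(len(x))
    # Closed-form kernel mean embedding of the N(0, 1) measure
    z = lengthscale / np.sqrt(l2 + 1) * np.exp(-x ** 2 / (2 * (l2 + 1)))
    return z @ np.linalg.solve(K, fx)

rng = np.random.default_rng(0)
x = rng.standard_normal(15)
f = x ** 2                      # E[x^2] = 1 under N(0, 1)

bq_est = bayesian_quadrature(x, f)
mc_est = f.mean()               # Monte-Carlo average over the same samples
```

In DBQPG the integrand is the high-dimensional policy gradient and the kernel computations are made scalable, but the underlying idea is the same: fit a Gaussian-process surrogate to the sampled values and integrate the surrogate in closed form, rather than averaging the samples directly.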

Published

2021-05-18

How to Cite

Akella, R. T., Azizzadenesheli, K., Ghavamzadeh, M., Anandkumar, A., & Yue, Y. (2021). Deep Bayesian Quadrature Policy Optimization. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 6600-6608. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16817

Section

AAAI Technical Track on Machine Learning I