Variance Penalized On-Policy and Off-Policy Actor-Critic

Authors

  • Arushi Jain, McGill University and Mila, Montreal
  • Gandharv Patil, McGill University and Mila, Montreal
  • Ayush Jain, McGill University and Mila, Montreal
  • Khimya Khetarpal, McGill University and Mila, Montreal
  • Doina Precup, McGill University and Mila, Montreal; Google DeepMind, Montreal

DOI:

https://doi.org/10.1609/aaai.v35i9.16964

Keywords:

Reinforcement Learning

Abstract

Reinforcement learning algorithms are typically geared towards optimizing the expected return of an agent. However, in many practical applications, low variance in the return is desired to ensure the reliability of an algorithm. In this paper, we propose on-policy and off-policy actor-critic algorithms that optimize a performance criterion involving both the mean and the variance of the return. Previous work estimates the variance indirectly through the second moment of the return. Instead, we use a much simpler, recently proposed direct variance estimator, which updates its estimates incrementally using temporal-difference methods. Using the variance-penalized criterion, we guarantee the convergence of our algorithm to locally optimal policies for finite state-action Markov decision processes. We demonstrate the utility of our approach in tabular and continuous MuJoCo domains. It not only performs on par with actor-critic and prior variance-penalization baselines in terms of expected return, but also generates trajectories with lower variance in the return.
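
The sketch below illustrates the kind of update the abstract describes: a critic for the value, a second critic that estimates the variance of the return directly via a temporal-difference recursion, and an actor step on a variance-penalized objective. It is a minimal tabular, on-policy illustration under assumptions, not the paper's exact algorithm: the variance critic here uses the squared TD error as its "reward" with discount gamma squared, the actor signal (delta - lam * delta_var) is a simplification of the paper's variance-penalized update, and the names lam, alpha_v, alpha_var, and alpha_pi as well as their values are illustrative.

import numpy as np

n_states, n_actions = 10, 4
gamma = 0.99          # discount factor
lam = 0.1             # variance-penalty coefficient (illustrative value)
alpha_v, alpha_var, alpha_pi = 0.1, 0.1, 0.01   # step sizes (illustrative)

V = np.zeros(n_states)                    # value critic
W = np.zeros(n_states)                    # direct variance critic
theta = np.zeros((n_states, n_actions))   # softmax policy parameters

def policy(s):
    # Softmax over action preferences for state s.
    prefs = theta[s] - theta[s].max()
    p = np.exp(prefs)
    return p / p.sum()

def update(s, a, r, s_next, done):
    # TD error and update for the value critic.
    delta = r + (0.0 if done else gamma * V[s_next]) - V[s]
    V[s] += alpha_v * delta

    # Direct variance estimator: a second TD critic whose "reward" is the
    # squared TD error and whose discount is gamma**2, so the variance
    # estimate is updated incrementally, with no second-moment bookkeeping.
    delta_var = delta ** 2 + (0.0 if done else gamma ** 2 * W[s_next]) - W[s]
    W[s] += alpha_var * delta_var

    # Actor step on the mean-variance criterion: the critic signal mixes
    # the reward TD error with the penalized variance TD error.
    pi = policy(s)
    grad_log = -pi
    grad_log[a] += 1.0
    theta[s] += alpha_pi * (delta - lam * delta_var) * grad_log

Setting lam = 0 recovers a standard actor-critic update; larger lam trades expected return for lower variance in the return.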

Published

2021-05-18

How to Cite

Jain, A., Patil, G., Jain, A., Khetarpal, K., & Precup, D. (2021). Variance Penalized On-Policy and Off-Policy Actor-Critic. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 7899-7907. https://doi.org/10.1609/aaai.v35i9.16964

Issue

Vol. 35 No. 9 (2021)

Section

AAAI Technical Track on Machine Learning II