Bayesian Distributional Policy Gradients

Authors

  • Luchen Li, Imperial College London
  • A. Aldo Faisal, Imperial College London

DOI:

https://doi.org/10.1609/aaai.v35i10.17024

Keywords:

Reinforcement Learning, Neural Generative Models & Autoencoders, Adversarial Learning & Robustness, Bayesian Learning

Abstract

Distributional Reinforcement Learning (RL) maintains the entire probability distribution of the reward-to-go, i.e. the return, providing richer learning signals that account for the uncertainty associated with policy performance, which may be beneficial for trading off exploration and exploitation, and for policy learning in general. Previous works in distributional RL focused mainly on computing state-action-return distributions; here we model state-return distributions. This enables us to translate successful conventional RL algorithms that are based on state values into distributional RL. We formulate the distributional Bellman operation as an inference-based auto-encoding process that minimises Wasserstein metrics between target and model return distributions. The proposed algorithm, BDPG (Bayesian Distributional Policy Gradients), uses adversarial training in joint-contrastive learning to estimate a variational posterior from the returns. Moreover, we can now interpret the return prediction uncertainty as an information gain, which yields a new curiosity measure that helps BDPG steer exploration actively and efficiently. We demonstrate in a suite of Atari 2600 games and MuJoCo tasks, including well-known hard-exploration challenges, that BDPG learns generally faster and with higher asymptotic performance than reference distributional RL algorithms.
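
The information-gain curiosity idea in the abstract can be illustrated with a minimal sketch, not the paper's actual implementation: assuming diagonal-Gaussian prior p(z|s) and posterior q(z|s, R) networks over a latent return representation (the module names, shapes, and architecture below are placeholders chosen for illustration), the bonus is the KL divergence between posterior and prior, i.e. how much an observed return updates the belief over the latent return, which can be added to the extrinsic reward to drive exploration.

```python
# Illustrative sketch only: a toy information-gain curiosity bonus for a
# latent return model. Names and network sizes are placeholders.
import torch
import torch.nn as nn


class LatentReturnModel(nn.Module):
    def __init__(self, state_dim, latent_dim=8, hidden=64):
        super().__init__()
        # Prior p(z | s): parameterised by the state alone.
        self.prior = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 2 * latent_dim))
        # Posterior q(z | s, R): additionally conditioned on an observed return.
        self.posterior = nn.Sequential(
            nn.Linear(state_dim + 1, hidden), nn.Tanh(),
            nn.Linear(hidden, 2 * latent_dim))

    @staticmethod
    def _gaussian(params):
        # Split network output into mean and log-std of a diagonal Gaussian.
        mu, log_std = params.chunk(2, dim=-1)
        return torch.distributions.Normal(mu, log_std.exp())

    def information_gain(self, state, observed_return):
        # KL( q(z | s, R) || p(z | s) ): how much the observed return
        # changes the belief over the latent return representation.
        p = self._gaussian(self.prior(state))
        q = self._gaussian(self.posterior(
            torch.cat([state, observed_return], dim=-1)))
        return torch.distributions.kl_divergence(q, p).sum(-1)


# Usage: add the bonus to the extrinsic reward as an exploration signal.
model = LatentReturnModel(state_dim=4)
states = torch.randn(32, 4)        # batch of states
returns = torch.randn(32, 1)       # sampled (e.g. bootstrapped) returns
bonus = model.information_gain(states, returns)   # shape: (32,)
```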

Published

2021-05-18

How to Cite

Li, L., & Faisal, A. A. (2021). Bayesian Distributional Policy Gradients. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10), 8429-8437. https://doi.org/10.1609/aaai.v35i10.17024

Issue

Vol. 35 No. 10 (2021)

Section

AAAI Technical Track on Machine Learning III