Posterior Coreset Construction with Kernelized Stein Discrepancy for Model-Based Reinforcement Learning

Authors

  • Souradip Chakraborty University of Maryland, College Park, USA
  • Amrit Singh Bedi University of Maryland, College Park, USA
  • Pratap Tokekar University of Maryland, College Park, USA
  • Alec Koppel JP Morgan AI Research, NY, USA
  • Brian Sadler DEVCOM Army Research Laboratory, USA
  • Furong Huang University of Maryland, College Park, USA
  • Dinesh Manocha University of Maryland, College Park, USA

DOI:

https://doi.org/10.1609/aaai.v37i6.25853

Keywords:

ML: Reinforcement Learning Theory, ML: Bayesian Learning, ML: Kernel Methods, ML: Reinforcement Learning Algorithms, ML: Scalability of ML Systems, RU: Sequential Decision Making

Abstract

Model-based approaches to reinforcement learning (MBRL) exhibit favorable performance in practice, but their theoretical guarantees in large spaces are mostly restricted to settings in which the transition model is Gaussian or Lipschitz, and demand a posterior estimate whose representational complexity grows unbounded with time. In this work, we develop a novel MBRL method that (i) relaxes the assumption on the target transition model, requiring only that it belong to a generic family of mixture models; (ii) is applicable to large-scale training by incorporating a compression step such that the posterior estimate consists of a Bayesian coreset of only statistically significant past state-action pairs; and (iii) exhibits sublinear Bayesian regret. To achieve these results, we adopt an approach based upon Stein's method, which, under a smoothness condition on the constructed posterior and target, allows the distributional distance to be evaluated in closed form as the kernelized Stein discrepancy (KSD). The compression step is then computed by greedily retaining only those samples that are more than a certain KSD away from the previous model estimate. Experimentally, we observe that this approach is competitive with several state-of-the-art RL methodologies and can achieve up to a 50 percent reduction in wall-clock time in some continuous control environments.
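The compression step described above can be pictured with a short sketch. The following is a minimal, illustrative Python implementation of KSD-based greedy thinning, assuming an RBF base kernel and access to the target score function ∇ log p; the bandwidth `h`, threshold `eps`, and the standard-Gaussian toy target are hypothetical choices for illustration, not the paper's exact procedure.

```python
# Minimal sketch of KSD-based greedy coreset thinning (illustrative only).
# Assumes an RBF base kernel and access to the target score grad log p(x);
# the bandwidth h and threshold eps below are hypothetical tuning choices.
import numpy as np


def stein_kernel(x, y, score_x, score_y, h=1.0):
    """Stein (Langevin) kernel k_p(x, y) built on an RBF base kernel with bandwidth h."""
    d = x.shape[0]
    diff = x - y
    sq = diff @ diff
    k = np.exp(-sq / (2.0 * h ** 2))
    term = (
        score_x @ score_y                 # grad log p(x) . grad log p(y)
        + score_x @ (diff / h ** 2)       # grad log p(x) . grad_y k / k
        + score_y @ (-diff / h ** 2)      # grad log p(y) . grad_x k / k
        + d / h ** 2 - sq / h ** 4        # trace of grad_x grad_y k / k
    )
    return k * term


def ksd_squared(points, score_fn, h=1.0):
    """V-statistic estimate of KSD^2 for a set of points under score_fn = grad log p."""
    n = len(points)
    scores = [score_fn(x) for x in points]
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += stein_kernel(points[i], points[j], scores[i], scores[j], h)
    return total / n ** 2


def greedy_coreset(stream, score_fn, eps=1e-2, h=1.0):
    """Retain a sample only if it shifts the running KSD estimate by more than eps."""
    coreset = []
    for x in stream:
        if not coreset:
            coreset.append(x)
            continue
        old = ksd_squared(coreset, score_fn, h)
        new = ksd_squared(coreset + [x], score_fn, h)
        if abs(new - old) > eps:  # treat the sample as statistically significant
            coreset.append(x)
    return coreset


# Toy usage: target is a standard Gaussian, so grad log p(x) = -x.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    samples = rng.normal(size=(200, 2))
    core = greedy_coreset(samples, score_fn=lambda x: -x, eps=1e-3)
    print(f"kept {len(core)} of {len(samples)} samples")
```

The appeal of the KSD here is that, unlike metrics requiring samples from the target, it only needs the target's score function, so the retention test can be evaluated in closed form as each new state-action pair arrives.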

Published

2023-06-26

How to Cite

Chakraborty, S., Bedi, A. S., Tokekar, P., Koppel, A., Sadler, B., Huang, F., & Manocha, D. (2023). Posterior Coreset Construction with Kernelized Stein Discrepancy for Model-Based Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(6), 6980-6988. https://doi.org/10.1609/aaai.v37i6.25853

Issue

Vol. 37 No. 6 (2023)

Section

AAAI Technical Track on Machine Learning I