Tight Performance Guarantees of Imitator Policies with Continuous Actions

Authors

  • Davide Maran, Politecnico di Milano
  • Alberto Maria Metelli, Politecnico di Milano
  • Marcello Restelli, Politecnico di Milano

DOI:

https://doi.org/10.1609/aaai.v37i8.26089

Keywords:

ML: Imitation Learning & Inverse Reinforcement Learning, ML: Reinforcement Learning Algorithms, ML: Reinforcement Learning Theory, ML: Learning Theory

Abstract

Behavioral Cloning (BC) aims at learning a policy that mimics the behavior demonstrated by an expert. The current theoretical understanding of BC is limited to the case of finite actions. In this paper, we study BC with the goal of providing theoretical guarantees on the performance of the imitator policy in the case of continuous actions. We start by deriving a novel bound on the performance gap based on the Wasserstein distance, applicable to continuous-action experts and holding under the assumption that the value function is Lipschitz continuous. Since this latter condition is hardly fulfilled in practice, even for Lipschitz Markov Decision Processes and policies, we propose a relaxed setting, proving that the value function is always Hölder continuous. This result is of independent interest and allows us to obtain a general bound in BC on the performance of the imitator policy. Finally, we analyze noise injection, a common practice in which the expert's action is executed in the environment after the application of a noise kernel. We show that this practice allows us to derive stronger performance guarantees, at the price of a bias due to the noise addition.
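
As a schematic illustration of the kind of guarantee described above (not the paper's exact statement; the precise constants and conditions are given in the paper), suppose the expert's action-value function Q^{\pi^E}(s, \cdot) is L-Lipschitz in the action. A standard performance-difference argument then yields a Wasserstein-based bound of the form

\[
\big| J(\pi^E) - J(\pi^I) \big| \;\le\; \frac{L}{1-\gamma} \, \sup_{s} W_1\!\big( \pi^E(\cdot \mid s),\, \pi^I(\cdot \mid s) \big),
\]

where J denotes the expected discounted return, \gamma the discount factor, \pi^E and \pi^I the expert and imitator policies, and W_1 the Wasserstein-1 distance between their action distributions in each state. The paper's Hölder relaxation and noise-injection analysis refine this template.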

Published

2023-06-26

How to Cite

Maran, D., Metelli, A. M., & Restelli, M. (2023). Tight Performance Guarantees of Imitator Policies with Continuous Actions. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 9073-9080. https://doi.org/10.1609/aaai.v37i8.26089

Section

AAAI Technical Track on Machine Learning III