Lifelong Hyper-Policy Optimization with Multiple Importance Sampling Regularization

Authors

  • Pierre Liotet, Politecnico di Milano
  • Francesco Vidaich, University of Padova
  • Alberto Maria Metelli, Politecnico di Milano
  • Marcello Restelli, Politecnico di Milano

DOI:

https://doi.org/10.1609/aaai.v36i7.20717

Keywords:

Machine Learning (ML)

Abstract

Learning in a lifelong setting, where the dynamics continually evolve, is a hard challenge for current reinforcement learning algorithms. Yet this would be a much-needed feature for practical applications. In this paper, we propose an approach that learns a hyper-policy, which takes time as input and outputs the parameters of the policy to be queried at that time. This hyper-policy is trained to maximize the estimated future performance, efficiently reusing past data by means of importance sampling, at the cost of introducing a controlled bias. We combine the future performance estimate with the past performance to mitigate catastrophic forgetting. To avoid overfitting to the collected data, we derive a differentiable variance bound that we embed as a penalization term. Finally, we empirically validate our approach, in comparison with state-of-the-art algorithms, on realistic environments, including water resource management and trading.
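The following is a minimal, hedged sketch (not the authors' code) of the core idea described in the abstract: a Gaussian hyper-policy that maps time to a distribution over policy parameters, trained on a multiple-importance-sampling (balance-heuristic) estimate of future performance with a variance penalty. It assumes PyTorch; the class and function names (`HyperPolicy`, `mis_objective`), the synthetic data, and the simple second-moment penalty are illustrative assumptions, and the penalty stands in for, but is not, the differentiable variance bound derived in the paper.

```python
# Illustrative sketch of a time-conditioned hyper-policy trained with a
# multiple importance sampling (MIS) objective plus a variance surrogate.
# Not the paper's implementation; names and data are placeholders.

import torch
from torch import nn
from torch.distributions import Normal


class HyperPolicy(nn.Module):
    """Gaussian hyper-policy over policy parameters, conditioned on time."""

    def __init__(self, param_dim: int, hidden: int = 32):
        super().__init__()
        self.mean_net = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, param_dim)
        )
        self.log_std = nn.Parameter(torch.zeros(param_dim))

    def log_prob(self, theta: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Factorized Gaussian density of policy parameters theta given time t.
        mean = self.mean_net(t.unsqueeze(-1))
        return Normal(mean, self.log_std.exp()).log_prob(theta).sum(-1)


def mis_objective(hp, thetas, times, returns, t_future, lam=0.1):
    """MIS estimate of the return at `t_future` from past (theta, time, return)
    triples, penalized by a crude surrogate of the estimator's variance."""
    K = thetas.shape[0]
    # Target density: parameters drawn from the hyper-policy queried at t_future.
    log_num = hp.log_prob(thetas, t_future.expand(K))
    with torch.no_grad():
        # Balance heuristic: mixture of the behavioral densities, here
        # approximated by the hyper-policy evaluated at each past query time.
        log_den_all = torch.stack([hp.log_prob(thetas, t.expand(K)) for t in times])
        log_den = torch.logsumexp(log_den_all, dim=0) - torch.log(torch.tensor(float(K)))
    w = torch.exp(log_num - log_den)            # importance weights
    j_hat = (w * returns).mean()                # estimated future performance
    penalty = torch.sqrt((w ** 2).mean() / K)   # second-moment variance surrogate
    return j_hat - lam * penalty


if __name__ == "__main__":
    # Tiny usage example on synthetic data: parameters collected at past times
    # with fake returns, optimized to perform well at a future time.
    torch.manual_seed(0)
    hp = HyperPolicy(param_dim=4)
    opt = torch.optim.Adam(hp.parameters(), lr=1e-2)
    times = torch.linspace(0.0, 1.0, 64)                  # past query times
    thetas = torch.randn(64, 4) + times.unsqueeze(-1)     # past policy parameters
    returns = -(thetas - 2 * times.unsqueeze(-1)).pow(2).sum(-1)  # synthetic returns
    for _ in range(200):
        loss = -mis_objective(hp, thetas, times, returns, t_future=torch.tensor(1.2))
        opt.zero_grad()
        loss.backward()
        opt.step()
```

In practice the paper also mixes in an estimate of past performance to mitigate forgetting; that term is omitted here to keep the sketch short.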

Published

2022-06-28

How to Cite

Liotet, P., Vidaich, F., Metelli, A. M., & Restelli, M. (2022). Lifelong Hyper-Policy Optimization with Multiple Importance Sampling Regularization. Proceedings of the AAAI Conference on Artificial Intelligence, 36(7), 7525-7533. https://doi.org/10.1609/aaai.v36i7.20717

Issue

Vol. 36 No. 7 (2022)

Section

AAAI Technical Track on Machine Learning II