Provably Efficient Causal Model-Based Reinforcement Learning for Systematic Generalization
DOI:
https://doi.org/10.1609/aaai.v37i8.26109
Keywords:
ML: Reinforcement Learning Theory, ML: Causal Learning
Abstract
In the sequential decision-making setting, an agent aims to achieve systematic generalization over a large, possibly infinite, set of environments. Such environments are modeled as discrete Markov decision processes with both states and actions represented through a feature vector. The underlying structure of the environments allows the transition dynamics to be factored into two components: one that is environment-specific and another that is shared. As an example, consider a set of environments that share the laws of motion. In this setting, the agent can take a finite number of reward-free interactions from a subset of these environments. The agent must then be able to approximately solve any planning task defined over any environment in the original set, relying only on the above interactions. Can we design a provably efficient algorithm that achieves this ambitious goal of systematic generalization? In this paper, we give a partially positive answer to this question. First, we provide a tractable formulation of systematic generalization by employing a causal viewpoint. Then, under specific structural assumptions, we provide a simple learning algorithm that guarantees any desired planning error up to an unavoidable sub-optimality term, while showcasing a polynomial sample complexity.
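To make the factored transition dynamics concrete, the display below is a minimal illustrative sketch, not the paper's own formulation: it assumes the next-state feature vector s' factors over its components, with each component depending on a subset of the current state-action features (its parents, written pa_i(s, a)); the index sets I_shared and I_e, splitting the components into shared and environment-specific factors, are hypothetical notation introduced here for illustration.

% Illustrative sketch only; the index sets and parent notation are assumptions, not the paper's definitions.
\[
  P_e(s' \mid s, a)
  \;=\;
  \underbrace{\prod_{i \in \mathcal{I}_{\mathrm{shared}}} P\big(s'_i \mid \mathrm{pa}_i(s, a)\big)}_{\text{shared across all environments}}
  \;\cdot\;
  \underbrace{\prod_{j \in \mathcal{I}_{e}} P_e\big(s'_j \mid \mathrm{pa}_j(s, a)\big)}_{\text{specific to environment } e}
\]

Under this reading, the shared factors can be estimated once from reward-free interactions with a subset of the environments, and only the environment-specific factors remain to be identified for a new environment.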
Published
2023-06-26
How to Cite
Mutti, M., De Santi, R., Rossi, E., Calderon, J. F., Bronstein, M., & Restelli, M. (2023). Provably Efficient Causal Model-Based Reinforcement Learning for Systematic Generalization. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 9251-9259. https://doi.org/10.1609/aaai.v37i8.26109
Issue
Section
AAAI Technical Track on Machine Learning III