Causal Transfer for Imitation Learning and Decision Making under Sensor-Shift

Authors

  • Jalal Etesami, BCAI
  • Philipp Geiger, BCAI

DOI

https://doi.org/10.1609/aaai.v34i06.6571

Abstract

Learning from demonstrations (LfD) is an efficient paradigm for training AI agents. But major issues arise when there are differences between (a) the demonstrator's own sensory input, (b) the sensors through which we observe the demonstrator, and (c) the sensory input of the agent we train.

In this paper, we propose a causal model-based framework for transfer learning under such “sensor-shifts”, for two common LfD tasks: (1) inferring the effect of the demonstrator's actions and (2) imitation learning. First, we rigorously analyze, at the population level, to what extent the relevant underlying mechanisms (the action effects and the demonstrator policy) can be identified and transferred from the available observations together with prior knowledge of the sensor characteristics, and we devise an algorithm to infer these mechanisms. Then we introduce several proxy methods that are easier to compute, estimate from finite data, and interpret than the exact solutions, alongside theoretical bounds on their closeness to the exact ones. We validate our two main methods on simulated and semi-real-world data.
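To make the sensor-shift setting concrete, below is a hypothetical toy sketch in Python/NumPy. It is not the paper's algorithm: it assumes a finite hidden state, exactly known sensor characteristics P(O | S), a square and invertible observation matrix, and population-level (not finite-sample) quantities. Under these illustrative assumptions, the demonstrator's action mechanism P(A | S) can be recovered from the observable joint P(O_ours, A) by solving a linear system and then composed with the agent's own sensor. All variable names are invented for this sketch.

    # Toy sketch (illustrative only): identification and transfer of a
    # demonstrator policy under sensor-shift, with known sensor matrices.
    import numpy as np

    rng = np.random.default_rng(0)
    n_states, n_obs, n_actions = 3, 3, 2

    p_s = np.array([0.5, 0.3, 0.2])                       # prior over hidden state S
    sensor_demo = rng.dirichlet(np.ones(n_obs), n_states)   # P(O_demo | S): demonstrator's sensor
    sensor_ours = rng.dirichlet(np.ones(n_obs), n_states)   # P(O_ours | S): sensor observing the demonstrator
    sensor_agent = rng.dirichlet(np.ones(n_obs), n_states)  # P(O_agent | S): the trained agent's sensor
    policy_demo = rng.dirichlet(np.ones(n_actions), n_obs)  # demonstrator policy P(A | O_demo)

    # Ground-truth mechanism to identify: P(A | S), marginalizing out the
    # demonstrator's unobserved sensory input O_demo.
    p_a_given_s = sensor_demo @ policy_demo               # shape (n_states, n_actions)

    # What demonstrations give us: the joint P(O_ours, A). Since A is
    # conditionally independent of O_ours given S, it factorizes as
    # sum_s P(s) P(o|s) P(a|s).
    m = sensor_ours.T * p_s                               # m[o, s] = P(o|s) P(s)
    p_oa = m @ p_a_given_s                                # observed joint P(O_ours, A)

    # Identification: with m known and invertible, recover P(A | S) by
    # solving the linear system m @ x = P(O_ours, A).
    p_a_given_s_hat = np.linalg.solve(m, p_oa)

    # Transfer: compose with the agent's sensor to obtain a usable policy,
    # P(A | O_agent) = sum_s P(s | O_agent) P(A | s).
    p_s_given_o_agent = (sensor_agent * p_s[:, None]).T
    p_s_given_o_agent /= p_s_given_o_agent.sum(axis=1, keepdims=True)
    policy_agent = p_s_given_o_agent @ p_a_given_s_hat

    print(np.allclose(p_a_given_s_hat, p_a_given_s))      # True: exactly identified here

When the observation matrix is not square or not invertible, this exact recovery breaks down, which is where approximations of the kind the abstract calls proxy methods become relevant.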

Published

2020-04-03

How to Cite

Etesami, J., & Geiger, P. (2020). Causal Transfer for Imitation Learning and Decision Making under Sensor-Shift. Proceedings of the AAAI Conference on Artificial Intelligence, 34(06), 10118-10125. https://doi.org/10.1609/aaai.v34i06.6571

Section

AAAI Technical Track: Reasoning under Uncertainty