Deterministic and Discriminative Imitation (D2-Imitation): Revisiting Adversarial Imitation for Sample Efficiency

Authors

  • Mingfei Sun University of Oxford
  • Sam Devlin Microsoft Research
  • Katja Hofmann Microsoft Research
  • Shimon Whiteson University of Oxford

DOI:

https://doi.org/10.1609/aaai.v36i8.20813

Keywords:

Machine Learning (ML)

Abstract

Sample efficiency is crucial for imitation learning methods to be applicable in real-world settings. Many studies improve sample efficiency by extending adversarial imitation to be off-policy, even though these off-policy extensions can either change the original objective or involve complicated optimization. We revisit the foundation of adversarial imitation and propose an off-policy, sample-efficient approach that requires no adversarial training or min-max optimization. Our formulation capitalizes on two key insights: (1) the similarity between the Bellman equation and the stationary state-action distribution equation allows us to derive a novel temporal difference (TD) learning approach; and (2) the use of a deterministic policy simplifies the TD learning. Combined, these insights yield a practical algorithm, Deterministic and Discriminative Imitation (D2-Imitation), which operates by first partitioning samples into two replay buffers and then learning a deterministic policy via off-policy reinforcement learning. Our empirical results show that D2-Imitation is effective in achieving good sample efficiency, outperforming several off-policy extensions of adversarial imitation on many control tasks.
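The abstract only summarizes the procedure at a high level. Below is a minimal, illustrative sketch of the two-buffer idea it describes, assuming a discriminator that routes transitions into "expert-like" and "non-expert" buffers, a 0/1 reward assignment, and a DDPG-style deterministic policy update. Network sizes, the reward values, and the update rule are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): partition transitions into two
# replay buffers with a discriminator, then train a deterministic policy with
# an off-policy TD update. Hyperparameters and architectures are placeholders.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, ACTION_DIM, GAMMA = 3, 1, 0.99

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

# The discriminator would be trained separately to score expert-likeness of (s, a);
# here it is only used to route transitions between the two buffers.
discriminator = mlp(STATE_DIM + ACTION_DIM, 1)
actor = mlp(STATE_DIM, ACTION_DIM)       # deterministic policy
critic = mlp(STATE_DIM + ACTION_DIM, 1)  # Q-function
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

buffer_pos, buffer_neg = [], []          # the two replay buffers

def partition(transition):
    """Route a (state, action, next_state) tuple to the positive or negative buffer."""
    s, a, _ = transition
    with torch.no_grad():
        expert_like = torch.sigmoid(discriminator(torch.cat([s, a]))) > 0.5
    (buffer_pos if expert_like else buffer_neg).append(transition)

def update(batch_size=32):
    """One off-policy update: reward 1 for positive-buffer samples, 0 otherwise (assumed)."""
    if len(buffer_pos) < batch_size or len(buffer_neg) < batch_size:
        return
    batch = [(t, 1.0) for t in random.sample(buffer_pos, batch_size)] + \
            [(t, 0.0) for t in random.sample(buffer_neg, batch_size)]
    s = torch.stack([t[0] for t, _ in batch])
    a = torch.stack([t[1] for t, _ in batch])
    s_next = torch.stack([t[2] for t, _ in batch])
    r = torch.tensor([[rew] for _, rew in batch])

    # TD target uses the deterministic policy's action at the next state.
    with torch.no_grad():
        target = r + GAMMA * critic(torch.cat([s_next, actor(s_next)], dim=1))
    critic_loss = F.mse_loss(critic(torch.cat([s, a], dim=1)), target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Deterministic policy gradient: push the policy's actions toward higher Q-values.
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```

In this reading, the fixed 0/1 rewards replace the adversarial reward signal, so no min-max optimization is needed during policy learning; see the paper for the actual formulation and its justification.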


Published

2022-06-28

How to Cite

Sun, M., Devlin, S., Hofmann, K., & Whiteson, S. (2022). Deterministic and Discriminative Imitation (D2-Imitation): Revisiting Adversarial Imitation for Sample Efficiency. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8), 8378-8385. https://doi.org/10.1609/aaai.v36i8.20813

Issue

Vol. 36 No. 8 (2022)

Section

AAAI Technical Track on Machine Learning III