Hybrid Reinforcement Learning with Expert State Sequences

Authors

  • Xiaoxiao Guo, IBM Research
  • Shiyu Chang, IBM Research
  • Mo Yu, IBM T. J. Watson Research Center
  • Gerald Tesauro, IBM Research
  • Murray Campbell, IBM Research

DOI:

https://doi.org/10.1609/aaai.v33i01.33013739

Abstract

Existing imitation learning approaches often require that complete demonstration data, including sequences of both states and actions, be available. In this paper, we consider a more realistic and difficult scenario in which a reinforcement learning agent has access only to the state sequences of an expert, while the expert's actions remain unobserved. We propose a novel tensor-based model to infer the unobserved actions underlying the expert state sequences. The policy of the agent is then optimized via a hybrid objective combining reinforcement learning and imitation learning. We evaluate our hybrid approach on an illustrative domain and Atari games. The empirical results show that (1) the agents are able to leverage expert state sequences to learn faster than pure reinforcement learning baselines, (2) our tensor-based action inference model outperforms standard deep neural networks at inferring expert actions, and (3) the hybrid policy optimization objective is robust to noise in the expert state sequences.
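
The abstract names two technical components: a model that infers the expert's unobserved actions from consecutive expert states, and a hybrid objective mixing reinforcement learning with imitation of those inferred actions. The sketch below is a rough, hypothetical illustration of how such pieces could fit together, not the authors' implementation; the class and function names, the low-rank bilinear factorization, the REINFORCE-style surrogate, and the weighting term lambda_il are all assumptions for illustration only.

# Hypothetical sketch (not the paper's code) of an action-inference head over
# expert state transitions plus a hybrid RL + imitation loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TensorActionInference(nn.Module):
    """Scores actions for a transition s_t -> s_{t+1} via a factorized bilinear form."""

    def __init__(self, state_dim, num_actions, rank=32):
        super().__init__()
        self.U = nn.Linear(state_dim, rank, bias=False)  # projects s_t
        self.V = nn.Linear(state_dim, rank, bias=False)  # projects s_{t+1}
        self.W = nn.Linear(rank, num_actions)            # interaction -> action logits

    def forward(self, s_t, s_next):
        interaction = self.U(s_t) * self.V(s_next)       # elementwise product ~ low-rank tensor contraction
        return self.W(interaction)                       # logits over candidate expert actions

def hybrid_loss(log_probs, returns, expert_logits, inferred_actions, lambda_il=0.5):
    """REINFORCE-style RL surrogate plus cross-entropy imitation of inferred expert actions."""
    rl_loss = -(log_probs * returns).mean()                     # policy-gradient term on the agent's own experience
    il_loss = F.cross_entropy(expert_logits, inferred_actions)  # imitate actions inferred for expert states
    return rl_loss + lambda_il * il_loss

In this sketch, lambda_il trades off the two terms; setting it to zero recovers a pure reinforcement learning objective.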

Published

2019-07-17

How to Cite

Guo, X., Chang, S., Yu, M., Tesauro, G., & Campbell, M. (2019). Hybrid Reinforcement Learning with Expert State Sequences. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 3739-3746. https://doi.org/10.1609/aaai.v33i01.33013739

Section

AAAI Technical Track: Machine Learning