Fast Inverse Reinforcement Learning with Interval Consistent Graph for Driving Behavior Prediction

Authors

  • Masamichi Shimosaka — Tokyo Institute of Technology
  • Junichi Sato — The University of Tokyo
  • Kazuhito Takenaka — Denso Corporation
  • Kentarou Hitomi — Denso Corporation

DOI:

https://doi.org/10.1609/aaai.v31i1.10762

Abstract

Maximum entropy inverse reinforcement learning (MaxEnt IRL) is an effective approach for learning the underlying rewards of demonstrated human behavior, but it becomes intractable in high-dimensional state spaces because its computational cost grows exponentially. In recent years, a few works have successfully approximated MaxEnt IRL in large state spaces using graphs; however, the types of state space models they support are quite limited. In this work, we extend these approaches to more generic large state space models with graphs in which the time interval consistency of the underlying Markov decision processes is guaranteed. We validate the proposed method in the context of driving behavior prediction. Experimental results on actual driving data confirm that our algorithm outperforms existing IRL frameworks in both prediction performance and computational cost.
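To make the core idea concrete, the following is a minimal, hedged sketch of the soft (maximum entropy) value iteration that underlies graph-based MaxEnt IRL: soft values are propagated backward over a graph of states, and the stochastic policy follows from the resulting log-partition values. The tiny graph, rewards, and function names here are illustrative assumptions, not the paper's interval-consistent graph construction.

```python
import numpy as np

# Illustrative graph-structured MDP (deterministic transitions):
# edges[s] = list of (next_state, reward) pairs; rewards are arbitrary.
edges = {
    0: [(1, -1.0), (2, -2.0)],
    1: [(3, -1.0)],
    2: [(3, -0.5)],
    3: [],  # absorbing goal state
}

def soft_value_iteration(edges, goal, n_iters=100):
    """Soft values V(s) = log sum_a exp(r(s, a) + V(s')), propagated from the goal."""
    V = {s: -np.inf for s in edges}
    V[goal] = 0.0
    for _ in range(n_iters):
        for s, succ in edges.items():
            if s == goal or not succ:
                continue
            V[s] = np.logaddexp.reduce([r + V[s2] for s2, r in succ])
    return V

def policy(edges, V, s):
    """MaxEnt stochastic policy: pi(a|s) proportional to exp(r(s, a) + V(s'))."""
    logits = np.array([r + V[s2] for s2, r in edges[s]])
    probs = np.exp(logits - V[s])
    return probs / probs.sum()

V = soft_value_iteration(edges, goal=3)
p = policy(edges, V, 0)  # distribution over the two edges leaving state 0
```

In full MaxEnt IRL this soft backup is one inner step inside a reward-learning loop; on an exact grid-world state space the backup touches every state, which is what makes naive MaxEnt IRL intractable in high dimensions and motivates the graph approximations the paper builds on.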

Published

2017-02-12

How to Cite

Shimosaka, M., Sato, J., Takenaka, K., & Hitomi, K. (2017). Fast Inverse Reinforcement Learning with Interval Consistent Graph for Driving Behavior Prediction. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10762

Section

Main Track: Machine Learning Applications