An Efficient Approach to Model-Based Hierarchical Reinforcement Learning

Authors

  • Zhuoru Li, National University of Singapore
  • Akshay Narayan, National University of Singapore
  • Tze-Yun Leong, National University of Singapore

DOI:

https://doi.org/10.1609/aaai.v31i1.11024

Keywords:

reinforcement learning, hierarchical reinforcement learning, MAXQ, R-MAX, model-based reinforcement learning

Abstract

We propose a model-based approach to hierarchical reinforcement learning that exploits shared knowledge and selective execution at different levels of abstraction to efficiently solve large, complex problems. Our framework adopts a new transition-dynamics learning algorithm that identifies the action-feature combinations common to the subtasks, and evaluates subtask execution choices through simulation. The framework is sample efficient and tolerates uncertain, incomplete characterization of the subtasks. We test the framework on common benchmark problems and on complex simulated robotic environments. It compares favorably against state-of-the-art algorithms and scales well to very large problems.
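
To make the shared-dynamics idea concrete, the sketch below shows one way an R-MAX-style transition model could be pooled across subtasks: every subtask logs its transitions into a single count table, and a state-action pair is treated as "known" after m visits. This is a minimal illustration under assumed names (SharedTransitionModel, known_threshold, update, is_known, and distribution are all hypothetical), not the paper's actual algorithm.

```python
from collections import defaultdict

class SharedTransitionModel:
    """R-MAX-style transition counts pooled across subtasks.

    Hypothetical sketch: names and structure are illustrative, not the
    paper's implementation. Every subtask records its transitions here,
    so experience gathered while executing one subtask improves the
    dynamics estimates available to all the others.
    """

    def __init__(self, known_threshold=5):
        self.known_threshold = known_threshold  # "m" visits before a pair counts as known (R-MAX)
        self.counts = defaultdict(lambda: defaultdict(int))  # (s, a) -> {s': visit count}
        self.totals = defaultdict(int)                       # (s, a) -> total visits

    def update(self, state, action, next_state):
        """Record one observed transition; the table is shared by all subtasks."""
        self.counts[(state, action)][next_state] += 1
        self.totals[(state, action)] += 1

    def is_known(self, state, action):
        """A pair is 'known' once it has been visited at least m times."""
        return self.totals[(state, action)] >= self.known_threshold

    def distribution(self, state, action):
        """Empirical next-state distribution; meaningful once is_known() holds."""
        n = self.totals[(state, action)]
        return {s2: c / n for s2, c in self.counts[(state, action)].items()}

if __name__ == "__main__":
    model = SharedTransitionModel(known_threshold=2)
    model.update("s0", "north", "s1")   # experience gathered in one subtask
    model.update("s0", "north", "s1")   # experience from another subtask, same table
    print(model.is_known("s0", "north"))      # True
    print(model.distribution("s0", "north"))  # {'s1': 1.0}
```

Once such a model reports enough pairs as known, competing subtask execution choices can be compared by rolling out simulated trajectories from the learned dynamics, which is the simulation-based evaluation the abstract refers to.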

Published

2017-02-12

How to Cite

Li, Z., Narayan, A., & Leong, T.-Y. (2017). An Efficient Approach to Model-Based Hierarchical Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.11024