Autonomous Option Invention for Continual Hierarchical Reinforcement Learning and Planning

Authors

  • Rashmeet Kaur Nayyar, Arizona State University
  • Siddharth Srivastava, Arizona State University

DOI

https://doi.org/10.1609/aaai.v39i18.34163

Abstract

Abstraction is key to scaling up reinforcement learning (RL). However, autonomously learning abstract state and action representations to enable transfer and generalization remains a challenging open problem. This paper presents a novel approach for inventing, representing, and utilizing options, which represent temporally extended behaviors, in continual RL settings. Our approach addresses streams of stochastic problems characterized by long horizons, sparse rewards, and unknown transition and reward functions. Our approach continually learns and maintains an interpretable state abstraction, and uses it to invent high-level options with abstract symbolic representations. These options meet three key desiderata: (1) composability for solving tasks effectively with lookahead planning, (2) reusability across problem instances for minimizing the need for relearning, and (3) mutual independence for reducing interference among options. Our main contributions are approaches for continually learning transferable, generalizable options with symbolic representations, and for integrating search techniques with RL to efficiently plan over these learned options to solve new problems. Empirical results demonstrate that the resulting approach effectively learns and transfers abstract knowledge across problem instances, achieving superior sample efficiency compared to state-of-the-art methods.
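To make the options terminology in the abstract concrete, the following is a minimal illustrative sketch of an option as a temporally extended behavior with an initiation set, an internal policy, and a termination condition. This is not the paper's implementation; all names, the toy state space, and the step function here are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Set

State = int  # hypothetical discrete 1-D state space for illustration

@dataclass
class Option:
    """A temporally extended behavior: where it can start, how it acts, when it ends."""
    initiation_set: Set[State]            # states in which the option may be invoked
    policy: Callable[[State], int]        # maps a state to a primitive action
    termination: Callable[[State], bool]  # True when the option finishes in a state

    def can_start(self, s: State) -> bool:
        return s in self.initiation_set

def execute_option(option: Option, s: State,
                   step: Callable[[State, int], State]) -> State:
    """Roll out an option from state s using a deterministic step function."""
    assert option.can_start(s), "option invoked outside its initiation set"
    while not option.termination(s):
        s = step(s, option.policy(s))
    return s

# Toy example: an option that moves right along a corridor until state 5.
move_right = Option(
    initiation_set={0, 1, 2, 3, 4},
    policy=lambda s: +1,
    termination=lambda s: s >= 5,
)
final_state = execute_option(move_right, 0, lambda s, a: s + a)  # reaches state 5
```

Because an option's effect can be summarized by the states it terminates in, a planner can chain such options with lookahead search instead of reasoning over individual primitive actions, which is the composability property the abstract emphasizes.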

Published

2025-04-11

How to Cite

Nayyar, R. K., & Srivastava, S. (2025). Autonomous Option Invention for Continual Hierarchical Reinforcement Learning and Planning. Proceedings of the AAAI Conference on Artificial Intelligence, 39(18), 19642–19650. https://doi.org/10.1609/aaai.v39i18.34163

Section

AAAI Technical Track on Machine Learning IV