On the Role of Weight Sharing During Deep Option Learning


  • Matthew Riemer, IBM Research
  • Ignacio Cases, Stanford University
  • Clemens Rosenbaum, University of Massachusetts
  • Miao Liu, IBM Research
  • Gerald Tesauro, IBM Research




The options framework is a popular approach for building temporally extended actions in reinforcement learning. In particular, the option-critic architecture provides general-purpose policy gradient theorems for learning temporally extended actions from scratch. However, past work makes the key assumption that each component of option-critic has independent parameters. In this work we note that while this assumption underlying the option-critic policy gradient theorems holds in the tabular case, it is always violated in practice in the deep function approximation setting. We therefore reconsider this assumption and derive more general extensions of option-critic and hierarchical option-critic training that optimize the full architecture with each update. It turns out that dropping the parameter-independence assumption challenges a belief in prior work that training the policy over options can be disentangled from the dynamics of the underlying options. In fact, learning can be sped up by focusing the policy over options on states where options are actually likely to terminate. We put our new algorithms to the test on sample-efficient learning of Atari games, demonstrating significantly improved stability and faster convergence when learning long options.
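To make the abstract's central idea concrete, below is a minimal NumPy sketch of the termination-focused update for the policy over options. This is not the authors' implementation: the tabular parameterization, the function names (`update_pi_omega`, `beta`), and the learning rate are all illustrative assumptions. In the deep setting discussed in the paper, the policy-over-options and termination heads would additionally share a common feature extractor, which is the source of the weight sharing the paper analyzes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_options = 4, 2

# Illustrative tabular parameters; in deep option-critic these heads
# would share a trunk network (the weight sharing studied in the paper).
theta_pi_omega = rng.normal(size=(n_states, n_options))  # policy over options
theta_beta = rng.normal(size=(n_states, n_options))      # termination functions

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pi_omega(s):
    """Distribution over options in state s."""
    return softmax(theta_pi_omega[s])

def beta(s, o):
    """Probability that option o terminates in state s."""
    return sigmoid(theta_beta[s, o])

def update_pi_omega(s, o_prev, advantage, lr=0.1):
    """Hypothetical termination-weighted policy-gradient step.

    The standard score-function update for the policy over options is
    scaled by beta(s, o_prev): learning concentrates on states where the
    current option is actually likely to terminate, i.e. where a fresh
    option choice matters.
    """
    probs = pi_omega(s)
    o_new = rng.choice(n_options, p=probs)
    grad = -probs
    grad[o_new] += 1.0  # grad of log pi_omega(o_new | s) for a softmax policy
    theta_pi_omega[s] += lr * beta(s, o_prev) * advantage * grad
    return o_new
```

When `beta(s, o_prev)` is near zero (the option is likely to continue), the policy over options is left nearly untouched in that state, which is the intuition behind the speed-up described above.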




How to Cite

Riemer, M., Cases, I., Rosenbaum, C., Liu, M., & Tesauro, G. (2020). On the Role of Weight Sharing During Deep Option Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 5519-5526. https://doi.org/10.1609/aaai.v34i04.6003



AAAI Technical Track: Machine Learning