Learning Generalizable and Composable Abstractions for Transfer in Reinforcement Learning

Authors

  • Rashmeet Kaur Nayyar, Arizona State University

DOI:

https://doi.org/10.1609/aaai.v38i21.30402

Keywords:

Hierarchical Planning And Reinforcement Learning, Transfer And Generalization, Learning Abstractions, Option Discovery, Representation Learning

Abstract

Reinforcement Learning (RL) in complex environments presents many challenges: agents must learn concise representations of both environments and behaviors in order to reason efficiently and to generalize their experience to new, unseen situations. However, RL approaches can be sample-inefficient and difficult to scale, especially in long-horizon, sparse-reward settings. To address these issues, the goal of my doctoral research is to develop methods that automatically construct semantically meaningful state and temporal abstractions for efficient transfer and generalization. In my work, I develop hierarchical approaches for learning transferable, generalizable knowledge in the form of symbolically represented options, as well as for integrating search techniques with RL to solve new problems by efficiently composing the learned options. Empirical results show that the resulting approaches effectively learn and transfer knowledge, achieving superior sample efficiency compared to state-of-the-art methods while also enhancing interpretability.

Published

2024-03-24

How to Cite

Nayyar, R. K. (2024). Learning Generalizable and Composable Abstractions for Transfer in Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23403-23404. https://doi.org/10.1609/aaai.v38i21.30402