Building Minimal and Reusable Causal State Abstractions for Reinforcement Learning

Authors

  • Zizhao Wang, The University of Texas at Austin
  • Caroline Wang, The University of Texas at Austin
  • Xuesu Xiao, George Mason University
  • Yuke Zhu, The University of Texas at Austin
  • Peter Stone, The University of Texas at Austin and Sony AI

DOI:

https://doi.org/10.1609/aaai.v38i14.29507

Keywords:

ML: Reinforcement Learning, ML: Causal Learning

Abstract

Two desiderata of reinforcement learning (RL) algorithms are the ability to learn from relatively little experience and the ability to learn policies that generalize to a range of problem specifications. In factored state spaces, one approach towards achieving both goals is to learn state abstractions, which only keep the necessary variables for learning the tasks at hand. This paper introduces Causal Bisimulation Modeling (CBM), a method that learns the causal relationships in the dynamics and reward functions for each task to derive a minimal, task-specific abstraction. CBM leverages and improves implicit modeling to train a high-fidelity causal dynamics model that can be reused for all tasks in the same environment. Empirical validation on two manipulation environments and four tasks reveals that CBM's learned implicit dynamics models identify the underlying causal relationships and state abstractions more accurately than explicit ones. Furthermore, the derived state abstractions allow a task learner to achieve near-oracle levels of sample efficiency and outperform baselines on all tasks.
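To illustrate the general idea described in the abstract of deriving a task-specific abstraction from learned causal relationships, the sketch below shows one way such an abstraction could be computed once a causal graph over state variables is available. This is a minimal, hypothetical illustration rather than CBM's actual implementation; the `dynamics_parents` and `reward_parents` structures are assumed stand-ins for the learned causal dependencies.

```python
# Minimal sketch (not the paper's algorithm): given a learned causal graph over
# state/action variables and the reward, keep only the variables that are
# causal ancestors of the reward under the dynamics.
from typing import Dict, Set

def causal_state_abstraction(
    dynamics_parents: Dict[str, Set[str]],  # variable -> variables that influence its next value
    reward_parents: Set[str],               # variables that directly influence the reward
) -> Set[str]:
    """Return the set of variables needed to predict the reward over time."""
    needed = set(reward_parents)
    frontier = list(reward_parents)
    while frontier:
        var = frontier.pop()
        for parent in dynamics_parents.get(var, set()):
            if parent not in needed:
                needed.add(parent)
                frontier.append(parent)
    return needed

# Illustrative example: the distractor variable is not on any causal path to the
# reward, so it is excluded from the abstraction (in practice, action variables
# would also be separated out from the state abstraction).
dynamics = {"gripper": {"action"}, "block": {"gripper", "block"}, "distractor": {"distractor"}}
reward = {"block"}
print(causal_state_abstraction(dynamics, reward))  # {'block', 'gripper', 'action'} (set order may vary)
```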

Published

2024-03-24

How to Cite

Wang, Z., Wang, C., Xiao, X., Zhu, Y., & Stone, P. (2024). Building Minimal and Reusable Causal State Abstractions for Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(14), 15778-15786. https://doi.org/10.1609/aaai.v38i14.29507

Issue

Vol. 38 No. 14 (2024)

Section

AAAI Technical Track on Machine Learning V