Behavior Estimation from Multi-Source Data for Offline Reinforcement Learning

Authors

  • Guoxi Zhang, Graduate School of Informatics, Kyoto University
  • Hisashi Kashima, Graduate School of Informatics, Kyoto University; RIKEN Guardian Robot Project

DOI:

https://doi.org/10.1609/aaai.v37i9.26326

Keywords:

ML: Reinforcement Learning Algorithms, ROB: Behavior Learning & Control, ROB: Learning & Optimization for ROB

Abstract

Offline reinforcement learning (RL) has received rising interest due to its appealing data efficiency. The present study addresses behavior estimation, the task of estimating the data-generating policy. In particular, this work considers a scenario where the data are collected from multiple sources. By neglecting this data heterogeneity, existing approaches produce poor estimates and impede policy learning. To overcome this drawback, the present study proposes a latent variable model and a model-learning algorithm that infer a set of policies from the data, allowing an agent to use as its behavior policy the policy that best describes a particular trajectory. To illustrate the benefit of such a fine-grained characterization of multi-source data, this work showcases how the proposed model can be incorporated into an existing offline RL algorithm. Lastly, an extensive empirical evaluation confirms the risks of neglecting data heterogeneity and the efficacy of the proposed model.
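To make the idea concrete, below is a minimal sketch of a mixture-of-policies behavior model in PyTorch, assuming discrete actions and a fixed number K of candidate policies. The class and parameter names (LatentBehaviorModel, num_policies, etc.) are hypothetical illustrations of a latent variable model over trajectories, not the authors' implementation: each trajectory is scored under every component policy, training maximizes the marginal likelihood over the latent policy index, and at use time a trajectory is assigned the component with the highest posterior responsibility.

```python
import torch
import torch.nn as nn


class LatentBehaviorModel(nn.Module):
    """Mixture of K candidate behavior policies with a latent
    per-trajectory assignment variable z (illustrative sketch)."""

    def __init__(self, state_dim, num_actions, num_policies, hidden=64):
        super().__init__()
        # One small policy network per mixture component.
        self.policies = nn.ModuleList(
            nn.Sequential(
                nn.Linear(state_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, num_actions),
            )
            for _ in range(num_policies)
        )
        # Learnable prior over the latent policy index z.
        self.prior_logits = nn.Parameter(torch.zeros(num_policies))

    def trajectory_log_likelihoods(self, states, actions):
        """log p(trajectory | z=k) for each component k.

        states: (T, state_dim) float tensor; actions: (T,) long tensor.
        Returns a (K,) tensor of per-component log-likelihoods."""
        lls = []
        for policy in self.policies:
            log_probs = policy(states).log_softmax(dim=-1)        # (T, A)
            lls.append(log_probs.gather(1, actions[:, None]).sum())
        return torch.stack(lls)

    def neg_marginal_log_likelihood(self, states, actions):
        """Training loss: -log p(traj) = -log sum_k p(z=k) p(traj | z=k)."""
        log_prior = self.prior_logits.log_softmax(dim=-1)
        return -torch.logsumexp(
            log_prior + self.trajectory_log_likelihoods(states, actions), dim=0
        )

    @torch.no_grad()
    def assign_behavior_policy(self, states, actions):
        """Return the index of the component with the highest posterior
        responsibility, i.e. the policy that best describes this trajectory."""
        log_prior = self.prior_logits.log_softmax(dim=-1)
        posterior = log_prior + self.trajectory_log_likelihoods(states, actions)
        return posterior.argmax().item()
```

Under these assumptions, training loops over trajectories and takes gradient steps on neg_marginal_log_likelihood; a downstream offline RL algorithm that needs a behavior policy (e.g., for policy constraints or importance weighting) can then query assign_behavior_policy per trajectory instead of fitting a single policy to the pooled, heterogeneous data.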

Published

2023-06-26

How to Cite

Zhang, G., & Kashima, H. (2023). Behavior Estimation from Multi-Source Data for Offline Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(9), 11201-11209. https://doi.org/10.1609/aaai.v37i9.26326

Section

AAAI Technical Track on Machine Learning IV