ACAMDA: Improving Data Efficiency in Reinforcement Learning through Guided Counterfactual Data Augmentation

Authors

  • Yuewen Sun, Mohamed bin Zayed University of Artificial Intelligence; Carnegie Mellon University
  • Erli Wang, NEC Labs, China
  • Biwei Huang, University of California San Diego
  • Chaochao Lu, Shanghai AI Laboratory
  • Lu Feng, NEC Labs, China
  • Changyin Sun, Anhui University
  • Kun Zhang, Mohamed bin Zayed University of Artificial Intelligence; Carnegie Mellon University

DOI:

https://doi.org/10.1609/aaai.v38i14.29442

Keywords:

ML: Reinforcement Learning, RU: Causality

Abstract

Data augmentation plays a crucial role in improving the data efficiency of reinforcement learning (RL). However, generating high-quality augmented data remains a significant challenge. To overcome this, we introduce ACAMDA (Adversarial Causal Modeling for Data Augmentation), a novel framework that integrates two causality-based tasks: causal structure recovery and counterfactual estimation. The unique aspect of ACAMDA lies in its ability to recover temporal causal relationships from limited non-expert datasets. Identifying these sequential cause-and-effect relationships enables the creation of realistic yet unobserved scenarios. We exploit this capability to generate guided counterfactual datasets, which in turn substantially reduces the need for extensive data collection. By simulating various state-action pairs under hypothetical actions, ACAMDA enriches the training dataset for diverse and heterogeneous conditions. Our experimental evaluation shows that ACAMDA outperforms existing methods, particularly when applied to novel and unseen domains.
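
To make the idea of guided counterfactual data augmentation concrete, below is a minimal illustrative sketch, not the authors' implementation and not the paper's algorithm: it assumes a causal dynamics model has already been recovered (here stood in for by a toy linear model with a hypothetical `CausalDynamicsModel` class and `counterfactual_augment` function) and shows how observed transitions can be re-simulated under hypothetical actions to enlarge the training set.

```python
# Illustrative sketch only (hypothetical names; not the authors' code):
# counterfactual data augmentation guided by a learned causal dynamics model.
import numpy as np


class CausalDynamicsModel:
    """Toy linear dynamics model with a causal mask over state/action
    variables; stands in for the causal model recovered from data."""

    def __init__(self, mask: np.ndarray, weights: np.ndarray):
        # mask[i, j] = 1 if input variable j is a causal parent of
        # next-state variable i; weights holds learned coefficients.
        self.mask = mask
        self.weights = weights

    def predict_next_state(self, state: np.ndarray, action: np.ndarray) -> np.ndarray:
        inputs = np.concatenate([state, action])
        # Only causal parents (per the recovered structure) influence
        # each next-state dimension.
        return (self.mask * self.weights) @ inputs


def counterfactual_augment(transitions, model, sample_hypothetical_action, n_per_transition=4):
    """For each observed (s, a, s') transition, substitute hypothetical
    actions and use the causal model to estimate counterfactual outcomes."""
    augmented = []
    for state, action, next_state in transitions:
        for _ in range(n_per_transition):
            cf_action = sample_hypothetical_action(state)
            cf_next_state = model.predict_next_state(state, cf_action)
            augmented.append((state, cf_action, cf_next_state))
    return augmented


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    state_dim, action_dim = 3, 1
    mask = rng.integers(0, 2, size=(state_dim, state_dim + action_dim))
    weights = rng.normal(size=(state_dim, state_dim + action_dim))
    model = CausalDynamicsModel(mask, weights)

    # A handful of observed transitions from a small, non-expert dataset.
    observed = [
        (rng.normal(size=state_dim), rng.normal(size=action_dim), rng.normal(size=state_dim))
        for _ in range(5)
    ]

    augmented = counterfactual_augment(
        observed, model,
        sample_hypothetical_action=lambda s: rng.uniform(-1.0, 1.0, size=action_dim),
    )
    print(f"{len(observed)} observed transitions -> {len(augmented)} counterfactual transitions")
```

In this sketch the augmented transitions would simply be appended to the replay buffer alongside the observed ones; the paper's actual guidance strategy, model class, and training details are described in the full text.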

Published

2024-03-24

How to Cite

Sun, Y., Wang, E., Huang, B., Lu, C., Feng, L., Sun, C., & Zhang, K. (2024). ACAMDA: Improving Data Efficiency in Reinforcement Learning through Guided Counterfactual Data Augmentation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(14), 15193-15201. https://doi.org/10.1609/aaai.v38i14.29442

Section

AAAI Technical Track on Machine Learning V