Markov Balance Satisfaction Improves Performance in Strictly Batch Offline Imitation Learning

Authors

  • Rishabh Agrawal, University of Southern California
  • Nathan Dahlin, University at Albany, SUNY
  • Rahul Jain, University of Southern California
  • Ashutosh Nayyar, University of Southern California

DOI:

https://doi.org/10.1609/aaai.v39i15.33680

Abstract

Imitation learning (IL) is notably effective for robotic tasks where directly programming behaviors or defining optimal control costs is challenging. In this work, we address a scenario in which the imitator relies solely on observed behavior and cannot interact with the environment during learning. It has no supplementary datasets beyond the expert's demonstrations and no information about the transition dynamics. Unlike state-of-the-art (SOTA) IL methods, our approach operates in this more constrained and realistic setting, addressing a key limitation of conventional IL. Our method builds on the Markov balance equation and introduces a novel conditional density estimation-based imitation learning framework: it employs conditional normalizing flows to estimate the transition dynamics and aims to satisfy the balance equation for the environment. Through a series of numerical experiments on Classic Control and MuJoCo environments, we demonstrate consistently superior empirical performance compared to many SOTA IL algorithms.
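The abstract's core computational ingredient is a conditional normalizing flow that estimates the transition density p(s' | s, a). As a minimal sketch of that idea only (not the authors' implementation), the snippet below evaluates such a density with a single conditional affine layer whose shift and log-scale come from a hypothetical linear conditioner on (s, a); a real model would stack many trained layers.

```python
import numpy as np

# Hypothetical one-layer conditional affine normalizing flow for
# estimating the transition density p(s' | s, a). The conditioner W
# is random and untrained; this illustrates only the
# change-of-variables log-likelihood computation.

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM = 3, 1

# Conditioner: maps (s, a) to a per-dimension shift and log-scale.
W = rng.normal(scale=0.1, size=(STATE_DIM + ACTION_DIM, 2 * STATE_DIM))

def log_prob_next_state(s, a, s_next):
    """Log density of s_next under the conditional affine flow.

    The flow maps s_next to a latent z = (s_next - shift) * exp(-log_scale);
    by the change of variables,
    log p(s' | s, a) = log N(z; 0, I) - sum(log_scale).
    """
    cond = np.concatenate([s, a]) @ W
    shift, log_scale = cond[:STATE_DIM], cond[STATE_DIM:]
    z = (s_next - shift) * np.exp(-log_scale)
    log_base = -0.5 * (z @ z + STATE_DIM * np.log(2 * np.pi))
    return log_base - log_scale.sum()

s = rng.normal(size=STATE_DIM)
a = rng.normal(size=ACTION_DIM)
s_next = rng.normal(size=STATE_DIM)
print(log_prob_next_state(s, a, s_next))
```

In a full method along the lines described, this learned density would then be plugged into the Markov balance equation as a constraint while fitting the imitation policy.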

Published

2025-04-11

How to Cite

Agrawal, R., Dahlin, N., Jain, R., & Nayyar, A. (2025). Markov Balance Satisfaction Improves Performance in Strictly Batch Offline Imitation Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 39(15), 15311-15319. https://doi.org/10.1609/aaai.v39i15.33680

Section

AAAI Technical Track on Machine Learning I