Unlabeled Imperfect Demonstrations in Adversarial Imitation Learning
DOI:
https://doi.org/10.1609/aaai.v37i8.26222
Keywords:
ML: Adversarial Learning & Robustness, ML: Imitation Learning & Inverse Reinforcement Learning
Abstract
Adversarial imitation learning has become a widely used imitation learning framework. The discriminator is often trained by taking expert demonstrations and policy trajectories as examples from two categories (positive vs. negative), and the policy is then expected to produce trajectories that are indistinguishable from the expert demonstrations. In the real world, however, collected expert demonstrations are likely to be imperfect, with only an unknown fraction of them being optimal. Instead of treating imperfect expert demonstrations as absolutely positive or negative, we investigate unlabeled imperfect expert demonstrations as they are. We develop a positive-unlabeled adversarial imitation learning algorithm that dynamically samples expert demonstrations which match well the trajectories from the constantly optimized agent policy. The trajectories of an initial agent policy may be closer to the non-optimal expert demonstrations, but within the adversarial imitation learning framework, the agent policy is optimized to fool the discriminator and thus produce trajectories similar to the optimal expert demonstrations. Theoretical analysis shows that our method learns from the imperfect demonstrations in a self-paced way. Experimental results on the MuJoCo and RoboSuite platforms demonstrate the effectiveness of our method from different aspects.
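The abstract describes training the discriminator with a positive-unlabeled objective rather than the standard positive/negative one. As a rough illustration only (the paper's exact estimator and prior-estimation scheme are not given on this page), the PyTorch sketch below adapts the non-negative PU risk estimator of Kiryo et al. (2017) to a GAIL-style discriminator, treating the imperfect demonstrations as an unlabeled positive/negative mixture and the current policy's trajectories as labeled negatives. The function name and the `neg_prior` hyperparameter are illustrative assumptions, not the authors' API.

```python
import torch
import torch.nn.functional as F

def nn_pu_discriminator_loss(d_demo, d_policy, neg_prior=0.5):
    """Hypothetical non-negative PU-style discriminator loss.

    d_demo:    discriminator logits on (unlabeled) imperfect expert
               demonstrations, a mixture of optimal and non-optimal data.
    d_policy:  discriminator logits on current policy trajectories,
               treated here as labeled negatives.
    neg_prior: assumed fraction of non-optimal data in the mixture
               (unknown in practice; a hyperparameter in this sketch).
    """
    bce = F.binary_cross_entropy_with_logits
    # Labeled-negative risk: push policy trajectories toward the
    # negative ("fake") label, weighted by the class prior.
    risk_neg = neg_prior * bce(d_policy, torch.zeros_like(d_policy))
    # Positive-class risk recovered from the unlabeled demonstrations
    # by subtracting the estimated negative component mixed into them;
    # clamping at zero is the non-negative correction that prevents
    # the unbiased estimator from overfitting (Kiryo et al., 2017).
    risk_pos = (bce(d_demo, torch.ones_like(d_demo))
                - neg_prior * bce(d_policy, torch.ones_like(d_policy)))
    return risk_neg + torch.clamp(risk_pos, min=0.0)

# Illustrative usage with random logits standing in for network outputs:
d_demo = torch.randn(64, requires_grad=True)
d_policy = torch.randn(64, requires_grad=True)
loss = nn_pu_discriminator_loss(d_demo, d_policy, neg_prior=0.4)
loss.backward()
```

The key design point this sketch captures is that the demonstrations are never forced to be absolutely positive: their positive-class risk is corrected by the negative component they are assumed to contain, so non-optimal demonstrations need not be matched by the policy.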
Published
2023-06-26
How to Cite
Wang, Y., Du, B., & Xu, C. (2023). Unlabeled Imperfect Demonstrations in Adversarial Imitation Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 10262-10270. https://doi.org/10.1609/aaai.v37i8.26222
Section
AAAI Technical Track on Machine Learning III