Inverse Reinforcement Learning by Estimating Expertise of Demonstrators
DOI:
https://doi.org/10.1609/aaai.v39i15.33705
Abstract
In Imitation Learning (IL), utilizing suboptimal and heterogeneous demonstrations presents a substantial challenge due to the varied nature of real-world data. Standard IL algorithms, however, treat these datasets as homogeneous, thereby inheriting the deficiencies of suboptimal demonstrators. Previous approaches to this issue rely on impractical assumptions such as high-quality data subsets, confidence rankings, or explicit environmental knowledge. This paper introduces IRLEED, *Inverse Reinforcement Learning by Estimating Expertise of Demonstrators*, a novel framework that overcomes these hurdles without prior knowledge of demonstrator expertise. IRLEED enhances existing Inverse Reinforcement Learning (IRL) algorithms by combining a general model of demonstrator suboptimality, which accounts for reward bias and action variance, with a Maximum Entropy IRL framework to efficiently derive the optimal policy from diverse, suboptimal demonstrations. Experiments in both online and offline IL settings, with simulated and human-generated data, demonstrate IRLEED's adaptability and effectiveness, making it a versatile solution for learning from suboptimal demonstrations.
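The abstract's "reward bias and action variance" model of suboptimality is commonly captured by a Boltzmann-rational demonstrator: each demonstrator acts according to a softmax over a perturbed reward, where a per-demonstrator inverse temperature controls action variance and an additive term controls reward bias. The sketch below is an illustrative assumption, not code from the paper; the function names and parameters (`beta`, `bias`) are hypothetical.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over action logits.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def demonstrator_policy(rewards, beta, bias):
    """Boltzmann-rational demonstrator model (illustrative).

    rewards: true per-action rewards at a state
    beta:    inverse temperature (higher = more expert, lower action variance)
    bias:    per-action reward bias perceived by this demonstrator
    """
    return softmax(beta * (rewards + bias))

# A near-expert demonstrator concentrates on the best action,
# while a biased, noisy demonstrator spreads probability mass.
rewards = np.array([1.0, 0.5, 0.0])
expert = demonstrator_policy(rewards, beta=10.0, bias=np.zeros(3))
novice = demonstrator_policy(rewards, beta=0.5, bias=np.array([0.0, 0.4, 0.0]))
```

Under such a model, an IRL procedure can jointly fit the shared reward together with each demonstrator's `beta` and `bias`, down-weighting noisy or biased demonstrations rather than averaging over them.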
Published
2025-04-11
How to Cite
Beliaev, M., & Pedarsani, R. (2025). Inverse Reinforcement Learning by Estimating Expertise of Demonstrators. Proceedings of the AAAI Conference on Artificial Intelligence, 39(15), 15532-15540. https://doi.org/10.1609/aaai.v39i15.33705
Section
AAAI Technical Track on Machine Learning I