Online Apprenticeship Learning

Authors

  • Lior Shani, Technion – Israel Institute of Technology, Israel
  • Tom Zahavy, DeepMind, UK
  • Shie Mannor, Technion – Israel Institute of Technology, Israel; NVIDIA Research, Israel

DOI:

https://doi.org/10.1609/aaai.v36i8.20798

Keywords:

Machine Learning (ML), Reasoning Under Uncertainty (RU)

Abstract

In Apprenticeship Learning (AL), we are given a Markov Decision Process (MDP) without access to the cost function. Instead, we observe trajectories sampled by an expert that acts according to some policy. The goal is to find a policy that matches the expert's performance on some predefined set of cost functions. We introduce an online variant of AL (Online Apprenticeship Learning; OAL), where the agent is expected to perform comparably to the expert while interacting with the environment. We show that the OAL problem can be effectively solved by combining two mirror-descent-based no-regret algorithms: one for policy optimization and another for learning the worst-case cost. By employing optimistic exploration, we derive a convergent algorithm with O(√K) regret, where K is the number of interactions with the MDP, plus an additional linear error term that depends on the number of expert trajectories available. Importantly, our algorithm avoids the need to solve an MDP at each iteration, making it more practical than prior AL methods. Finally, we implement a deep variant of our algorithm that shares some similarities with GAIL, but where the discriminator is replaced with the costs learned by OAL. Our simulations suggest that OAL performs well in high-dimensional control problems.
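To make the two-player recipe in the abstract concrete, the sketch below runs the two mirror-descent updates against each other on a small finite-horizon tabular MDP. This is an illustrative toy under stated assumptions, not the paper's algorithm: it uses the true transition kernel and a synthetic stand-in expert policy, whereas OAL works with estimated dynamics plus optimistic exploration bonuses and with expert statistics computed from demonstration trajectories. All names here (phi, mu_E, eta_pi, eta_c, q_and_mu) are invented for the sketch.

    # A minimal sketch of the two mirror-descent players (assumptions noted in comments).
    import numpy as np

    rng = np.random.default_rng(0)
    S, A, H, d = 5, 3, 10, 4                     # states, actions, horizon, feature dim
    P = rng.dirichlet(np.ones(S), size=(S, A))   # true transitions P[s, a] -> next-state dist
    phi = rng.uniform(0, 1, size=(S, A, d))      # cost features; linear cost c_w(s, a) = w . phi(s, a)

    def q_and_mu(pi, w):
        """Q-values under cost c_w (backward pass) and feature expectation (forward pass)."""
        c = phi @ w                              # per-step cost table, shape (S, A)
        Q = np.zeros((H, S, A))
        V = np.zeros(S)
        for h in reversed(range(H)):
            Q[h] = c + P @ V                     # Bellman backup under the current cost
            V = (pi[h] * Q[h]).sum(axis=1)
        rho = np.zeros(S); rho[0] = 1.0          # fixed start state (assumption)
        mu = np.zeros(d)
        for h in range(H):
            sa = rho[:, None] * pi[h]            # state-action occupancy at step h
            mu += (sa[..., None] * phi).sum(axis=(0, 1))
            rho = np.einsum('sa,sat->t', sa, P)
        return Q, mu

    pi_E = rng.dirichlet(np.ones(A), size=(H, S))  # stand-in "expert" policy
    _, mu_E = q_and_mu(pi_E, np.zeros(d))          # in reality, estimated from expert trajectories

    pi = np.full((H, S, A), 1.0 / A)               # uniform initial policy
    w = np.zeros(d)                                # cost parameter, kept in [-1, 1]^d
    eta_pi, eta_c = 0.5, 0.5                       # step sizes (assumed)

    for k in range(300):
        Q, mu = q_and_mu(pi, w)
        # Cost player: projected online gradient ascent toward the worst-case cost,
        # i.e. the linear cost on which the agent most underperforms the expert.
        w = np.clip(w + eta_c * (mu - mu_E), -1.0, 1.0)
        # Policy player: KL-regularized mirror descent = exponentiated-gradient update.
        pi = pi * np.exp(-eta_pi * Q)
        pi /= pi.sum(axis=2, keepdims=True)

    _, mu = q_and_mu(pi, w)
    print("feature-matching gap:", np.abs(mu - mu_E).max())

In this toy, the cost player's no-regret updates drive w toward the cost the agent is worst at, while the policy player's soft improvement step reduces that cost; playing the two no-regret algorithms against each other is what underlies the O(√K) guarantee described above, with optimism supplying exploration when the dynamics are unknown.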

Published

2022-06-28

How to Cite

Shani, L., Zahavy, T., & Mannor, S. (2022). Online Apprenticeship Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8), 8240-8248. https://doi.org/10.1609/aaai.v36i8.20798

Section

AAAI Technical Track on Machine Learning III