Apprenticeship Learning via Frank-Wolfe

Authors

  • Tom Zahavy, Google
  • Alon Cohen, Google
  • Haim Kaplan, Google
  • Yishay Mansour, Google

DOI:

https://doi.org/10.1609/aaai.v34i04.6150

Abstract

We consider the application of the Frank-Wolfe (FW) algorithm to Apprenticeship Learning (AL). In this setting, we are given a Markov Decision Process (MDP) without an explicit reward function. Instead, we observe an expert that acts according to some policy, and the goal is to find a policy whose feature expectations are closest to those of the expert policy. We formulate this problem as finding the projection of the expert's feature expectations onto the feature expectations polytope – the convex hull of the feature expectations of all deterministic policies in the MDP. We show that this formulation is equivalent to the AL objective and that solving this problem with the FW algorithm is equivalent to the well-known Projection method of Abbeel and Ng (2004). This insight allows us to analyze AL with tools from the convex optimization literature and to derive tighter convergence bounds for AL. Specifically, we show that a variant of the FW method based on taking "away steps" achieves a linear rate of convergence when applied to AL, and that a stochastic version of the FW algorithm can be used to avoid precise estimation of feature expectations. We also show experimentally that this version outperforms the FW baseline. To the best of our knowledge, this is the first work to show linear convergence rates for AL.
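
To illustrate the connection described above, below is a minimal sketch (in Python, not the authors' code) of the basic FW iteration for the projection problem. The `best_response` oracle, its signature, the tolerance, and the step-size rule are illustrative assumptions: the oracle stands in for solving the MDP under the reward induced by the current gradient, which is precisely the step that links FW to the Projection method of Abbeel and Ng.

```python
import numpy as np

def frank_wolfe_al(mu_expert, best_response, num_iters=100, tol=1e-8):
    """Illustrative Frank-Wolfe loop for apprenticeship learning.

    Minimizes f(mu) = 0.5 * ||mu_expert - mu||^2 over the feature
    expectations polytope (the convex hull of the feature expectations
    of all deterministic policies in the MDP).

    mu_expert     : np.ndarray, estimated feature expectations of the expert.
    best_response : hypothetical linear oracle, w -> np.ndarray; returns the
                    feature expectations of a deterministic policy that is
                    optimal for the reward r(s, a) = <w, phi(s, a)>.
                    In practice this is an MDP solver (e.g., value iteration).
    """
    # Start from an arbitrary vertex of the polytope (any deterministic policy).
    mu = best_response(np.zeros_like(mu_expert))
    for t in range(1, num_iters + 1):
        grad = mu - mu_expert          # gradient of f at the current iterate
        s = best_response(-grad)       # FW vertex: solve the MDP with reward weights mu_expert - mu
        gap = grad @ (mu - s)          # FW duality gap, upper bounds f(mu) - f*
        if gap <= tol:
            break
        gamma = 2.0 / (t + 2)          # standard FW step size; exact line search also works
        mu = (1 - gamma) * mu + gamma * s  # convex combination stays inside the polytope
    # The final iterate is a mixture of the deterministic policies returned by the
    # oracle, which is how the Projection method outputs a (mixed) apprentice policy.
    return mu
```

The away-steps and stochastic variants analyzed in the paper modify this basic loop (by also moving away from previously visited vertices, or by using noisy estimates of the feature expectations); the sketch above shows only the standard iteration.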

Published

2020-04-03

How to Cite

Zahavy, T., Cohen, A., Kaplan, H., & Mansour, Y. (2020). Apprenticeship Learning via Frank-Wolfe. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 6720-6728. https://doi.org/10.1609/aaai.v34i04.6150

Section

AAAI Technical Track: Machine Learning