TY  - JOUR
AU  - Bi, Jing
AU  - Dhiman, Vikas
AU  - Xiao, Tianyou
AU  - Xu, Chenliang
PY  - 2020/04/03
Y2  - 2024/03/29
TI  - Learning from Interventions Using Hierarchical Policies for Safe Learning
JF  - Proceedings of the AAAI Conference on Artificial Intelligence
JA  - AAAI
VL  - 34
IS  - 06
SE  - AAAI Technical Track: Robotics
DO  - 10.1609/aaai.v34i06.6602
UR  - https://ojs.aaai.org/index.php/AAAI/article/view/6602
SP  - 10352-10360
AB  - Learning from Demonstrations (LfD) via Behavior Cloning (BC) works well on multiple complex tasks. However, a limitation of the typical LfD approach is that it requires expert demonstrations for all scenarios, including those in which the algorithm is already well-trained. The recently proposed Learning from Interventions (LfI) overcomes this limitation by using an expert overseer. The expert overseer only intervenes when it suspects that an unsafe action is about to be taken. Although LfI significantly improves over LfD, the state-of-the-art LfI fails to account for delay caused by the expert's reaction time and only learns short-term behavior. We address these limitations by 1) interpolating the expert's interventions back in time, and 2) by splitting the policy into two hierarchical levels, one that generates sub-goals for the future and another that generates actions to reach those desired sub-goals. This sub-goal prediction forces the algorithm to learn long-term behavior while also being robust to the expert's reaction time. Our experiments show that LfI using sub-goals in a hierarchical policy framework trains faster and achieves better asymptotic performance than typical LfD.
ER  - 