Multi-Level Compositional Reasoning for Interactive Instruction Following
DOI:
https://doi.org/10.1609/aaai.v37i1.25094
Keywords:
CV: Vision for Robotics & Autonomous Driving, ROB: Applications
Abstract
Robotic agents performing domestic chores from natural language directives must master the complex job of navigating an environment and interacting with the objects in it. The tasks given to the agents are often composite and thus challenging, as completing them requires reasoning about multiple subtasks, e.g., bringing a cup of coffee. To address this challenge, we propose to divide and conquer it by breaking the task into multiple subgoals and attending to them individually for better navigation and interaction. We call our method the Multi-level Compositional Reasoning Agent (MCR-Agent). Specifically, we learn a three-level action policy. At the highest level, a policy composition controller infers a sequence of human-interpretable subgoals to be executed based on the language instructions. At the middle level, a master policy discriminatively controls the agent by alternating between a navigation policy and various independent interaction policies. Finally, at the lowest level, we infer manipulation actions with the corresponding object masks using the appropriate interaction policy. Our approach not only generates human-interpretable subgoals but also achieves a 2.03% absolute gain over comparable state-of-the-art methods in the efficiency metric (PLWSR on the unseen set) without using rule-based planning or a semantic spatial memory. The code is available at https://github.com/yonseivnl/mcr-agent.
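The three-level decomposition described in the abstract can be illustrated with a minimal control-loop sketch. All class names, method names, and the hard-coded subgoal plan below are illustrative assumptions, not the authors' actual implementation or API; in MCR-Agent each level is a learned policy, whereas here the levels are stubbed out to show only the flow of control.

```python
# Hypothetical sketch of a three-level compositional policy; names and the
# fixed subgoal plan are assumptions, not MCR-Agent's real interfaces.

class PolicyCompositionController:
    """Highest level: maps language instructions to a subgoal sequence."""
    def plan(self, instructions):
        # A learned controller would predict subgoals from language; here we
        # return a fixed plan for a "bring a cup of coffee" style task.
        return ["GotoLocation", "PickupObject", "GotoLocation", "PutObject"]

class MasterPolicy:
    """Middle level: decides whether to navigate or hand off to interaction."""
    def is_navigation(self, subgoal):
        return subgoal == "GotoLocation"

class NavigationPolicy:
    """Lowest level (navigation): emits low-level movement actions."""
    def act(self, observation):
        return "MoveAhead"  # placeholder for a predicted navigation action

class InteractionPolicy:
    """Lowest level (manipulation): emits an action plus an object mask."""
    def act(self, subgoal, observation):
        return subgoal, f"mask_for({observation})"  # placeholder mask

def run_episode(instructions, observations):
    controller = PolicyCompositionController()
    master = MasterPolicy()
    nav, interact = NavigationPolicy(), InteractionPolicy()
    trace = []
    for subgoal, obs in zip(controller.plan(instructions), observations):
        if master.is_navigation(subgoal):
            trace.append((subgoal, nav.act(obs)))
        else:
            trace.append((subgoal, interact.act(subgoal, obs)))
    return trace

trace = run_episode("bring a cup of coffee", ["o1", "o2", "o3", "o4"])
```

The key design point mirrored here is that the master policy never predicts manipulation actions itself; it only routes each subgoal to the navigation policy or to an independent interaction policy, which keeps the subgoal sequence human-interpretable.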
Published
2023-06-26
How to Cite
Bhambri, S., Kim, B., & Choi, J. (2023). Multi-Level Compositional Reasoning for Interactive Instruction Following. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1), 223-231. https://doi.org/10.1609/aaai.v37i1.25094
Section
AAAI Technical Track on Computer Vision I