Source-Target Inference Models for Spatial Instruction Understanding


  • Hao Tan The University of North Carolina at Chapel Hill
  • Mohit Bansal The University of North Carolina at Chapel Hill



Models that can execute natural language instructions for situated robotic tasks such as assembly and navigation have several useful applications in homes, offices, and remote scenarios. We study the semantics of spatially-referred configuration and arrangement instructions, based on the challenging Bisk-2016 blank-labeled block dataset. This task involves finding a source block and moving it to the target position (mentioned via a reference block and offset), where the blocks have no names or colors and are just referred to via spatial location features. We present novel models for the subtasks of source block classification and target position regression, based on joint-loss language and spatial-world representation learning, as well as CNN-based and dual attention models to compute the alignment between the world blocks and the instruction phrases. For target position prediction, we compare two inference approaches: annealed sampling via policy gradient versus expectation inference via supervised regression. Our models achieve the new state-of-the-art on this task, with an improvement of 47% on source block accuracy and 22% on target position distance.
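The contrast between the two target-position inference approaches can be illustrated with a minimal sketch. This is not the authors' implementation: the attention weights, block positions, and offset are hypothetical inputs standing in for the model's learned quantities. Expectation inference combines all reference blocks by their attention weights (a differentiable quantity suited to supervised regression on the distance loss), while sampling inference draws a single reference block from the attention distribution (as one would when training with policy gradient).

```python
import random

def expectation_inference(block_positions, attention, offset):
    """Sketch of expectation inference: the target is the attention-weighted
    average of reference-block positions plus the predicted offset."""
    x = sum(w * bx for w, (bx, _) in zip(attention, block_positions))
    y = sum(w * by for w, (_, by) in zip(attention, block_positions))
    return (x + offset[0], y + offset[1])

def sampling_inference(block_positions, attention, offset, rng=random):
    """Sketch of sampling inference: draw one reference block from the
    attention distribution, then add the predicted offset."""
    r = rng.random()
    acc = 0.0
    for w, (bx, by) in zip(attention, block_positions):
        acc += w
        if r < acc:
            return (bx + offset[0], by + offset[1])
    bx, by = block_positions[-1]  # guard against floating-point rounding
    return (bx + offset[0], by + offset[1])

# Hypothetical example: two candidate reference blocks, equal attention.
blocks = [(0.0, 0.0), (2.0, 0.0)]
attn = [0.5, 0.5]
off = (1.0, 0.0)
print(expectation_inference(blocks, attn, off))  # → (2.0, 0.0)
```

Under equal attention, expectation inference returns the midpoint of the two blocks shifted by the offset, whereas sampling inference commits to one block per draw, yielding either (1.0, 0.0) or (3.0, 0.0).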




How to Cite

Tan, H., & Bansal, M. (2018). Source-Target Inference Models for Spatial Instruction Understanding. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1).