Ref-NMS: Breaking Proposal Bottlenecks in Two-Stage Referring Expression Grounding


  • Long Chen Tencent AI Lab, Shenzhen
  • Wenbo Ma Zhejiang University, Hangzhou
  • Jun Xiao Zhejiang University, Hangzhou
  • Hanwang Zhang Nanyang Technological University, Singapore
  • Shih-Fu Chang Columbia University, New York



Language and Vision, Multi-modal Vision, Language Grounding & Multi-modal NLP


The prevailing framework for solving referring expression grounding is based on a two-stage process: 1) detecting proposals with an object detector and 2) grounding the referent to one of the proposals. Existing two-stage solutions mostly focus on the grounding step, which aims to align the expressions with the proposals. In this paper, we argue that these methods overlook an obvious mismatch between the roles of proposals in the two stages: they generate proposals solely based on detection confidence (i.e., expression-agnostic), hoping that the proposals contain all the right instances mentioned in the expression (i.e., expression-aware). Due to this mismatch, current two-stage methods suffer from a severe performance drop between detected and ground-truth proposals. To this end, we propose Ref-NMS, the first method to yield expression-aware proposals at the first stage. Ref-NMS regards all nouns in the expression as critical objects, and introduces a lightweight module to predict a score for aligning each box with a critical object. These scores guide the NMS operation to filter out boxes irrelevant to the expression, thereby increasing the recall of critical objects and significantly improving grounding performance. Since Ref-NMS is agnostic to the grounding step, it can be easily integrated into any state-of-the-art two-stage method. Extensive ablation studies on several backbones, benchmarks, and tasks consistently demonstrate the superiority of Ref-NMS. Code is available at:
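To make the idea concrete, the following is a minimal sketch of expression-aware NMS in the spirit of the abstract: each box carries both an expression-agnostic detection score and a relatedness score to the expression's nouns, and the fused score drives a standard greedy NMS loop. The relatedness scores and the fusion rule (a simple product) are illustrative assumptions, not the paper's exact formulation.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def expression_aware_nms(boxes, det_scores, rel_scores, iou_thresh=0.5):
    """Rank boxes by detection confidence fused with expression
    relatedness, then greedily suppress overlapping boxes."""
    # Fusion by product is an assumption for illustration.
    fused = [d * r for d, r in zip(det_scores, rel_scores)]
    order = sorted(range(len(boxes)), key=lambda i: fused[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thresh]
    return keep

if __name__ == "__main__":
    boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
    det_scores = [0.9, 0.8, 0.7]  # expression-agnostic detector confidence
    rel_scores = [0.2, 0.9, 0.8]  # stand-in for the lightweight module's
                                  # box-to-noun alignment scores
    print(expression_aware_nms(boxes, det_scores, rel_scores))  # → [1, 2]
```

Note that vanilla NMS, ranking by `det_scores` alone, would keep box 0 and suppress box 1; fusing in the relatedness score lets the box that better matches the expression survive suppression, which is the recall gain the abstract describes.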




How to Cite

Chen, L., Ma, W., Xiao, J., Zhang, H., & Chang, S.-F. (2021). Ref-NMS: Breaking Proposal Bottlenecks in Two-Stage Referring Expression Grounding. Proceedings of the AAAI Conference on Artificial Intelligence, 35(2), 1036-1044.



AAAI Technical Track on Computer Vision I