LCTR: On Awakening the Local Continuity of Transformer for Weakly Supervised Object Localization


  • Zhiwei Chen Xiamen University
  • Changan Wang Tencent Youtu Lab
  • Yabiao Wang Tencent Youtu Lab
  • Guannan Jiang CATL
  • Yunhang Shen Tencent Youtu Lab
  • Ying Tai Tencent Youtu Lab
  • Chengjie Wang Tencent Youtu Lab
  • Wei Zhang CATL
  • Liujuan Cao Xiamen University



Computer Vision (CV)


Weakly supervised object localization (WSOL) aims to learn an object localizer using only image-level labels. Convolutional neural network (CNN)-based techniques often highlight only the most discriminative part of an object while ignoring its full extent. Recently, the transformer architecture has been applied to WSOL to capture long-range feature dependencies through its self-attention mechanism and multilayer perceptron structure. Nevertheless, transformers lack the locality inductive bias inherent to CNNs and may therefore deteriorate local feature details in WSOL. In this paper, we propose a novel transformer-based framework, termed LCTR (Local Continuity TRansformer), which aims to enhance the local perception capability of global features among long-range feature dependencies. To this end, we propose a relational patch-attention module (RPAM), which considers cross-patch information on a global basis. We further design a cue digging module (CDM), which uses local features to guide the model's learning so that weak local responses are highlighted. Finally, comprehensive experiments are carried out on two widely used datasets, i.e., CUB-200-2011 and ILSVRC, to verify the effectiveness of our method.
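The abstract's core idea of deriving localization from transformer patch features can be illustrated with a minimal sketch. This is not the paper's LCTR pipeline (RPAM and CDM are not reproduced here); it only shows the generic WSOL step of reshaping per-patch attention scores into a 2-D map, normalizing, thresholding, and extracting a bounding box. All function names and the fixed threshold are illustrative assumptions.

```python
import numpy as np

def localization_map(patch_attn, grid_size, threshold=0.5):
    """Turn per-patch attention scores into a binary localization mask.

    patch_attn: (num_patches,) scores for each image patch (e.g. class-token
        attention averaged over heads); grid_size: (h, w) with h * w patches.
    The scores are reshaped to the patch grid, min-max normalized to [0, 1],
    and thresholded. The 0.5 threshold is an illustrative choice.
    """
    h, w = grid_size
    cam = np.asarray(patch_attn, dtype=float).reshape(h, w)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam >= threshold

def bounding_box(mask):
    """Tightest (row0, col0, row1, col1) box around True cells; None if empty."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None
    return int(rows.min()), int(cols.min()), int(rows.max()), int(cols.max())
```

For example, on a 4x4 patch grid where only a central 2x2 block of patches receives high attention, the mask covers exactly those four patches and the box spans rows and columns 1 to 2.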




How to Cite

Chen, Z., Wang, C., Wang, Y., Jiang, G., Shen, Y., Tai, Y., Wang, C., Zhang, W., & Cao, L. (2022). LCTR: On Awakening the Local Continuity of Transformer for Weakly Supervised Object Localization. Proceedings of the AAAI Conference on Artificial Intelligence, 36(1), 410-418.



AAAI Technical Track on Computer Vision I