Pose-Guided Multi-Granularity Attention Network for Text-Based Person Search

Authors

  • Ya Jing, Chinese Academy of Sciences
  • Chenyang Si, Chinese Academy of Sciences
  • Junbo Wang, Chinese Academy of Sciences
  • Wei Wang, Chinese Academy of Sciences
  • Liang Wang, Chinese Academy of Sciences
  • Tieniu Tan, Chinese Academy of Sciences

DOI

https://doi.org/10.1609/aaai.v34i07.6777

Abstract

Text-based person search aims to retrieve images of the corresponding person from an image database given a sentence describing that person, and has great potential for applications such as video surveillance. Extracting the visual content that corresponds to the textual description is the key to this cross-modal matching problem. Moreover, correlated images and descriptions involve different granularities of semantic relevance, which previous methods usually ignore. To exploit the multilevel corresponding visual contents, we propose a pose-guided multi-granularity attention network (PMA). First, we propose a coarse alignment network (CA) that selects the image regions related to the global description via similarity-based attention. To further capture phrase-related visual body parts, we propose a fine-grained alignment network (FA), which employs pose information to learn latent semantic alignments between visual body parts and textual noun phrases. To verify the effectiveness of our model, we perform extensive experiments on the CUHK Person Description Dataset (CUHK-PEDES), which is currently the only available dataset for text-based person search. Experimental results show that our approach outperforms the state-of-the-art methods by 15% in terms of the top-1 metric.
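To make the coarse alignment (CA) idea concrete, the sketch below illustrates similarity-based attention in PyTorch: image region features are weighted by their cosine similarity to a global sentence embedding, so that description-relevant regions dominate the pooled visual representation. This is a minimal illustrative sketch, not the authors' implementation; the function name, tensor shapes, and the temperature parameter are assumptions.

```python
# Illustrative sketch of similarity-based attention for coarse alignment.
# Shapes, names, and the temperature hyperparameter are hypothetical.
import torch
import torch.nn.functional as F

def coarse_alignment(region_feats: torch.Tensor,
                     text_feat: torch.Tensor,
                     temperature: float = 0.1) -> torch.Tensor:
    """Pool image region features with attention weights derived from
    text-region cosine similarity.

    region_feats: (B, R, D) image region features (e.g. CNN feature map cells)
    text_feat:    (B, D)    global sentence embedding
    returns:      (B, D)    text-conditioned visual embedding
    """
    regions = F.normalize(region_feats, dim=-1)          # (B, R, D)
    text = F.normalize(text_feat, dim=-1).unsqueeze(-1)  # (B, D, 1)
    sim = torch.bmm(regions, text).squeeze(-1)           # (B, R) cosine similarities
    attn = F.softmax(sim / temperature, dim=-1)          # emphasize related regions
    return torch.bmm(attn.unsqueeze(1), region_feats).squeeze(1)  # weighted sum

# Toy usage: 6 regions with 256-d features for a batch of 2 image-text pairs.
if __name__ == "__main__":
    v = torch.randn(2, 6, 256)
    t = torch.randn(2, 256)
    print(coarse_alignment(v, t).shape)  # torch.Size([2, 256])
```

The fine-grained alignment (FA) network would apply the same attention principle at a finer granularity, matching pose-derived body-part features against noun-phrase embeddings rather than a single sentence embedding.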

Published

2020-04-03

How to Cite

Jing, Y., Si, C., Wang, J., Wang, W., Wang, L., & Tan, T. (2020). Pose-Guided Multi-Granularity Attention Network for Text-Based Person Search. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 11189-11196. https://doi.org/10.1609/aaai.v34i07.6777

Issue

Vol. 34 No. 07 (2020)

Section

AAAI Technical Track: Vision