Exploring Visual Context for Weakly Supervised Person Search

Authors

  • Yichao Yan, Shanghai Jiao Tong University
  • Jinpeng Li, Inception Institute of Artificial Intelligence
  • Shengcai Liao, Inception Institute of Artificial Intelligence
  • Jie Qin, Inception Institute of Artificial Intelligence
  • Bingbing Ni, Shanghai Jiao Tong University
  • Ke Lu, University of Chinese Academy of Sciences
  • Xiaokang Yang, Shanghai Jiao Tong University

DOI:

https://doi.org/10.1609/aaai.v36i3.20209

Keywords:

Computer Vision (CV)

Abstract

Person search has recently emerged as a challenging task that jointly addresses pedestrian detection and person re-identification. Existing approaches follow a fully supervised setting where both bounding box and identity annotations are available. However, annotating identities is labor-intensive, which limits the practicability and scalability of current frameworks. This paper introduces weakly supervised person search, a novel setting in which only bounding box annotations are available. We propose to address this task by investigating three levels of context clues (i.e., detection, memory, and scene) in unconstrained natural images. The first two promote local and global discriminative capabilities, while the last enhances clustering accuracy. Despite its simple design, our CGPS framework boosts the baseline model by 8.8% in mAP on CUHK-SYSU. Surprisingly, it even achieves performance comparable to several supervised person search models. Our code is available at https://github.com/ljpadam/CGPS.
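To make the weakly supervised setup concrete, the sketch below illustrates one ingredient such frameworks typically rely on: cluster-based pseudo-identity labels combined with a feature memory that acts as a non-parametric classifier. This is a minimal, hypothetical example under those assumptions; the class name FeatureMemory, the hyperparameters, and the DBSCAN-based label assignment are illustrative and not the authors' CGPS implementation.

    # Hypothetical sketch (not the authors' CGPS code): cluster-based
    # pseudo-labels plus a feature memory used as a non-parametric
    # classifier, a common pattern in weakly supervised re-identification.
    import torch
    import torch.nn.functional as F

    class FeatureMemory:
        def __init__(self, num_clusters, dim, momentum=0.2, temperature=0.07):
            # One L2-normalized slot per pseudo-identity cluster.
            self.bank = F.normalize(torch.randn(num_clusters, dim), dim=1)
            self.momentum = momentum
            self.temperature = temperature

        def loss(self, feats, pseudo_labels):
            # feats: (B, dim) L2-normalized box features from the re-ID head.
            # pseudo_labels: (B,) cluster indices assigned offline (e.g., DBSCAN).
            logits = feats @ self.bank.t() / self.temperature
            return F.cross_entropy(logits, pseudo_labels)

        @torch.no_grad()
        def update(self, feats, pseudo_labels):
            # Momentum (EMA) update of the memory slot of each labeled feature.
            for f, y in zip(feats, pseudo_labels):
                slot = self.momentum * self.bank[y] + (1 - self.momentum) * f
                self.bank[y] = F.normalize(slot, dim=0)

    # Usage sketch (box_feats and pseudo_labels are assumed to come from the
    # detector's re-ID head and an offline clustering step, respectively):
    # mem = FeatureMemory(num_clusters=5000, dim=256)
    # l = mem.loss(F.normalize(box_feats, dim=1), pseudo_labels)
    # mem.update(F.normalize(box_feats, dim=1).detach(), pseudo_labels)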

Published

2022-06-28

How to Cite

Yan, Y., Li, J., Liao, S., Qin, J., Ni, B., Lu, K., & Yang, X. (2022). Exploring Visual Context for Weakly Supervised Person Search. Proceedings of the AAAI Conference on Artificial Intelligence, 36(3), 3027-3035. https://doi.org/10.1609/aaai.v36i3.20209

Issue

Vol. 36 No. 3 (2022)

Section

AAAI Technical Track on Computer Vision III