Robust Multi-Modality Person Re-identification

Authors

  • Aihua Zheng, Anhui University
  • Zi Wang, Anhui University
  • Zihan Chen, Anhui University
  • Chenglong Li, Anhui University
  • Jin Tang, Anhui University

DOI:

https://doi.org/10.1609/aaai.v35i4.16467

Keywords:

Image and Video Retrieval, Multi-modal Vision

Abstract

To overcome the illumination limitations of visible-light person re-identification (Re-ID) and the modality-heterogeneity issue in cross-modality Re-ID, we propose to exploit the complementary advantages of multiple modalities, namely visible (RGB), near infrared (NI), and thermal infrared (TI), for robust person Re-ID. A novel progressive fusion network is designed to learn effective multi-modal features, progressing from single to multiple modalities and from local to global views. Our method works well in diverse challenging scenarios, even in the presence of missing modalities. Moreover, we contribute a comprehensive benchmark dataset, RGBNT201, comprising 201 identities captured under various challenging conditions, to facilitate research on RGB-NI-TI multi-modality person Re-ID. Comprehensive experiments on the RGBNT201 dataset against state-of-the-art methods demonstrate the value of multi-modality person Re-ID and the effectiveness of the proposed approach, establishing a new benchmark and a new baseline for multi-modality person Re-ID.
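The abstract describes the architecture only at a high level. For illustration, the following is a minimal, hypothetical PyTorch sketch of the progressive-fusion idea, fusing single-modality features step by step into one joint embedding. It is not the authors' actual network: the module names, fusion order, and dimensions are all assumptions, and the paper's robustness to missing modalities is not modeled here.

# Hypothetical sketch of progressive multi-modality fusion for Re-ID.
# Not the paper's network; names, fusion order, and sizes are assumed.
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Single-modality feature extractor (stand-in for a CNN backbone)."""
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, x):
        return self.net(x)

class ProgressiveFusionReID(nn.Module):
    """Fuses RGB, NI, and TI features step by step into one embedding."""
    def __init__(self, dim=256, num_ids=201):
        super().__init__()
        self.rgb = ModalityEncoder(dim)
        self.ni = ModalityEncoder(dim)
        self.ti = ModalityEncoder(dim)
        # Stage 1 (assumed): fuse the two infrared modalities first.
        self.fuse_ir = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        # Stage 2 (assumed): fuse the infrared representation with RGB.
        self.fuse_all = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.classifier = nn.Linear(dim, num_ids)

    def forward(self, rgb, ni, ti):
        f_rgb, f_ni, f_ti = self.rgb(rgb), self.ni(ni), self.ti(ti)
        f_ir = self.fuse_ir(torch.cat([f_ni, f_ti], dim=1))
        f = self.fuse_all(torch.cat([f_rgb, f_ir], dim=1))
        return f, self.classifier(f)

# Usage: one triplet of aligned RGB / NI / TI crops per person image.
model = ProgressiveFusionReID()
x = torch.randn(4, 3, 128, 64)  # batch of 4 person crops per modality
embedding, logits = model(x, x, x)
print(embedding.shape, logits.shape)  # [4, 256] and [4, 201]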

Published

2021-05-18

How to Cite

Zheng, A., Wang, Z., Chen, Z., Li, C., & Tang, J. (2021). Robust Multi-Modality Person Re-identification. Proceedings of the AAAI Conference on Artificial Intelligence, 35(4), 3529-3537. https://doi.org/10.1609/aaai.v35i4.16467

Section

AAAI Technical Track on Computer Vision III