Joint Color-irrelevant Consistency Learning and Identity-aware Modality Adaptation for Visible-infrared Cross Modality Person Re-identification

Authors

  • Zhiwei Zhao, School of Information Science and Technology, University of Science and Technology of China, Hefei, China; Key Laboratory of Electromagnetic Space Information, Chinese Academy of Sciences, Hefei, China
  • Bin Liu, School of Information Science and Technology, University of Science and Technology of China, Hefei, China; Key Laboratory of Electromagnetic Space Information, Chinese Academy of Sciences, Hefei, China
  • Qi Chu, School of Information Science and Technology, University of Science and Technology of China, Hefei, China; Key Laboratory of Electromagnetic Space Information, Chinese Academy of Sciences, Hefei, China
  • Yan Lu, School of Information Science and Technology, University of Science and Technology of China, Hefei, China; Key Laboratory of Electromagnetic Space Information, Chinese Academy of Sciences, Hefei, China
  • Nenghai Yu, School of Information Science and Technology, University of Science and Technology of China, Hefei, China; Key Laboratory of Electromagnetic Space Information, Chinese Academy of Sciences, Hefei, China

DOI:

https://doi.org/10.1609/aaai.v35i4.16466

Keywords:

Image and Video Retrieval, Multi-modal Vision

Abstract

Visible-infrared cross modality person re-identification (VI-ReID) is a core but challenging technology in 24-hour intelligent surveillance systems. Eliminating the large modality gap lies at the heart of VI-ReID. Conventional methods mainly focus on directly aligning the heterogeneous modalities into the same feature space. However, due to the unbalanced color information between visible and infrared images, the features of visible images tend to overfit to clothing color, which harms modality alignment. Besides, these methods mainly align the heterogeneous feature distributions at the dataset level while ignoring valuable identity information, which may cause feature misalignment for some identities and weaken the discriminative power of the features. To tackle the above problems, we propose a novel approach for VI-ReID. It learns color-irrelevant features through color-irrelevant consistency learning (CICL) and aligns identity-level feature distributions via identity-aware modality adaptation (IAMA). CICL and IAMA are integrated into a joint learning framework and promote each other. Extensive experiments on two popular datasets, SYSU-MM01 and RegDB, demonstrate the superiority and effectiveness of our approach over state-of-the-art methods.
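The proceedings page does not include code; as a rough, hypothetical sketch of the color-irrelevant consistency idea summarized above, the Python snippet below enforces agreement between the features of a visible image and a color-perturbed copy of it. The encoder module, the channel-permutation transform, and the MSE consistency loss are illustrative assumptions only, not the paper's actual formulation.

    import torch
    import torch.nn.functional as F

    def color_perturb(x):
        # Randomly permute the RGB channels of visible images; a simple
        # stand-in for the color transformations used to suppress color
        # cues (assumption: the paper's exact transforms may differ).
        perm = torch.randperm(3, device=x.device)
        return x[:, perm, :, :]

    def cicl_consistency_loss(encoder, visible_batch):
        # Color-irrelevant consistency: the embedding of an image and the
        # embedding of its color-perturbed copy should agree, pushing the
        # encoder to ignore clothing color.
        f_orig = F.normalize(encoder(visible_batch), dim=1)
        f_pert = F.normalize(encoder(color_perturb(visible_batch)), dim=1)
        return F.mse_loss(f_pert, f_orig.detach())

In a joint framework such as the one described in the abstract, a consistency term of this kind would be combined with identification and modality-adaptation objectives.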

Published

2021-05-18

How to Cite

Zhao, Z., Liu, B., Chu, Q., Lu, Y., & Yu, N. (2021). Joint Color-irrelevant Consistency Learning and Identity-aware Modality Adaptation for Visible-infrared Cross Modality Person Re-identification. Proceedings of the AAAI Conference on Artificial Intelligence, 35(4), 3520-3528. https://doi.org/10.1609/aaai.v35i4.16466

Section

AAAI Technical Track on Computer Vision III