AXM-Net: Implicit Cross-Modal Feature Alignment for Person Re-identification

Authors

  • Ammarah Farooq, CVSSP, University of Surrey
  • Muhammad Awais, CVSSP, University of Surrey; Surrey Institute for People-centred AI (SI-PAI); Sensus Futuris Ltd.
  • Josef Kittler, CVSSP, University of Surrey; Surrey Institute for People-centred AI (SI-PAI); Sensus Futuris Ltd.
  • Syed Safwan Khalid, CVSSP, University of Surrey

DOI:

https://doi.org/10.1609/aaai.v36i4.20370

Keywords:

Domain(s) Of Application (APP)

Abstract

Cross-modal person re-identification (Re-ID) is critical for modern video surveillance systems. The key challenge is to align cross-modal representations according to the semantic information present for a person while ignoring background information. This work presents a novel convolutional neural network (CNN) based architecture designed to learn semantically aligned cross-modal visual and textual representations. The underlying building block, named AXM-Block, is a unified multi-layer network that dynamically exploits multi-scale knowledge from both modalities and re-calibrates each modality according to shared semantics. To complement the convolutional design, contextual attention is applied in the text branch to capture long-term dependencies. Moreover, we propose a unique design to enhance visual part-based feature coherence and locality information. Our framework is novel in its ability to implicitly learn aligned semantics between modalities during the feature learning stage. The unified feature learning effectively utilizes textual data as a super-annotation signal for visual representation learning and automatically rejects irrelevant information. The entire AXM-Net is trained end-to-end on the CUHK-PEDES dataset. We report results on two tasks: person search and cross-modal Re-ID. AXM-Net outperforms the current state-of-the-art (SOTA) methods, achieving 64.44% Rank@1 on the CUHK-PEDES test set. It also outperforms SOTA by more than 10% in cross-viewpoint text-to-image Re-ID scenarios on the CrossRe-ID and CUHK-SYSU datasets.
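The abstract's central idea, re-calibrating each modality according to semantics shared between the visual and textual branches, can be illustrated with a minimal sketch. The paper's actual AXM-Block design is not detailed on this page, so everything below is an illustrative assumption: a squeeze-and-excitation-style gating in NumPy, where channel gates computed from the pooled features of both modalities are applied back to each modality. The function name `axm_recalibrate` and the parameter `w_shared` are hypothetical, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def axm_recalibrate(visual_feats, text_feats, w_shared):
    """Illustrative cross-modal re-calibration (not the paper's exact block).

    visual_feats: (C,) pooled visual channel descriptor
    text_feats:   (C,) pooled textual channel descriptor
    w_shared:     (C, 2C) projection over the concatenated modalities
    Returns both descriptors gated by the same shared-semantics weights.
    """
    # Summarize both modalities into one joint semantic vector.
    joint = np.concatenate([visual_feats, text_feats])   # shape (2C,)
    # Channel-wise gates in (0, 1), conditioned on the shared semantics.
    gates = sigmoid(w_shared @ joint)                    # shape (C,)
    # The same gates re-calibrate each modality, suppressing channels
    # (e.g. background clutter) not supported by the other modality.
    return visual_feats * gates, text_feats * gates

rng = np.random.default_rng(0)
C = 8
v = rng.normal(size=C)
t = rng.normal(size=C)
W = rng.normal(size=(C, 2 * C)) * 0.1
v_out, t_out = axm_recalibrate(v, t, W)
```

Because the gates lie strictly in (0, 1), each output channel is a damped version of its input; in the paper's terms, channels inconsistent with the shared semantics would be learned to receive small gates.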

Published

2022-06-28

How to Cite

Farooq, A., Awais, M., Kittler, J., & Khalid, S. S. (2022). AXM-Net: Implicit Cross-Modal Feature Alignment for Person Re-identification. Proceedings of the AAAI Conference on Artificial Intelligence, 36(4), 4477-4485. https://doi.org/10.1609/aaai.v36i4.20370

Section

AAAI Technical Track on Domain(s) Of Application