Modality-Adaptive Mixup and Invariant Decomposition for RGB-Infrared Person Re-identification
Keywords: Computer Vision (CV)
Abstract
RGB-infrared person re-identification is an emerging cross-modality re-identification task that is highly challenging due to the significant modality discrepancy between RGB and infrared images. In this work, we propose a novel modality-adaptive mixup and invariant decomposition (MID) approach for RGB-infrared person re-identification that learns modality-invariant and discriminative representations. MID designs a modality-adaptive mixup scheme to generate suitable mixed-modality images between RGB and infrared images, mitigating the inherent modality discrepancy at the pixel level. It formulates the modality mixup procedure as a Markov decision process, in which an actor-critic agent learns a dynamic, locally linear interpolation policy between different regions of cross-modality images under a deep reinforcement learning framework. Such a policy guarantees modality invariance in a more continuous latent space and avoids manifold intrusion by corrupted mixed-modality samples. Moreover, to further counter the modality discrepancy and enforce invariant visual semantics at the feature level, MID employs modality-adaptive convolution decomposition to disassemble a regular convolution layer into modality-specific basis layers and a modality-shared coefficient layer. Extensive experimental results on two challenging benchmarks demonstrate the superior performance of MID over state-of-the-art methods.
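To make the two mechanisms in the abstract concrete, the following are minimal, hypothetical sketches. The first illustrates region-wise linear mixup between an RGB and an infrared image; in the paper the per-region interpolation coefficients are proposed by an actor-critic agent, whereas here they are simply passed in. The function name `region_mixup`, the grid-based partition, and all shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def region_mixup(rgb, ir, lambdas):
    """Mix RGB and infrared images with one coefficient per spatial region.

    rgb, ir:  (B, C, H, W) image batches from the two modalities.
    lambdas:  (B, G, G) per-region interpolation weights in [0, 1];
              in the paper these would come from a learned policy
              (an assumption here: we treat them as given inputs).
    """
    B, C, H, W = rgb.shape
    # Broadcast each region's coefficient over the pixels it covers.
    lam = lambdas.unsqueeze(1)                          # (B, 1, G, G)
    lam = F.interpolate(lam, size=(H, W), mode="nearest")  # (B, 1, H, W)
    # Local linear interpolation between the two modalities.
    return lam * rgb + (1.0 - lam) * ir
```

The second sketch illustrates the convolution decomposition: each modality routes through its own basis layer, while a single coefficient layer is shared. The choice of a depthwise convolution as the basis and a 1x1 convolution as the shared coefficients is one plausible reading of the decomposition, assumed here for illustration.

```python
import torch
import torch.nn as nn

class DecomposedConv(nn.Module):
    """Hypothetical modality-adaptive convolution decomposition:
    per-modality 'basis' layers plus one modality-shared 'coefficient'
    layer. Layer choices are illustrative, not the authors' exact design."""

    def __init__(self, in_ch, out_ch, kernel_size=3, num_modalities=2):
        super().__init__()
        # Modality-specific basis layers: one depthwise conv per modality.
        self.bases = nn.ModuleList([
            nn.Conv2d(in_ch, in_ch, kernel_size,
                      padding=kernel_size // 2, groups=in_ch, bias=False)
            for _ in range(num_modalities)
        ])
        # Modality-shared coefficient layer: a 1x1 conv that linearly
        # recombines basis responses, enforcing shared visual semantics.
        self.coeff = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x, modality):
        # 'modality' selects the basis branch (e.g., 0 = RGB, 1 = infrared).
        return self.coeff(self.bases[modality](x))

# Usage: both modalities pass through the same coefficient layer.
layer = DecomposedConv(in_ch=64, out_ch=128)
rgb_feat = layer(torch.randn(4, 64, 32, 16), modality=0)
ir_feat = layer(torch.randn(4, 64, 32, 16), modality=1)
```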
How to Cite
Huang, Z., Liu, J., Li, L., Zheng, K., & Zha, Z.-J. (2022). Modality-Adaptive Mixup and Invariant Decomposition for RGB-Infrared Person Re-identification. Proceedings of the AAAI Conference on Artificial Intelligence, 36(1), 1034-1042. https://doi.org/10.1609/aaai.v36i1.19987
AAAI Technical Track on Computer Vision I