TY - JOUR
AU - Jia, Jian
AU - Gao, Naiyu
AU - He, Fei
AU - Chen, Xiaotang
AU - Huang, Kaiqi
PY - 2022/06/28
Y2 - 2024/03/28
TI - Learning Disentangled Attribute Representations for Robust Pedestrian Attribute Recognition
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 36
IS - 1
SE - AAAI Technical Track on Computer Vision I
DO - 10.1609/aaai.v36i1.19991
UR - https://ojs.aaai.org/index.php/AAAI/article/view/19991
SP - 1069-1077
AB - Although various methods have been proposed for pedestrian attribute recognition, most studies follow the same feature learning mechanism, i.e., learning a shared pedestrian image feature to classify multiple attributes. However, this mechanism leads to low-confidence predictions and non-robustness of the model in the inference stage. In this paper, we investigate why this is the case. We mathematically discover that the central cause is that the optimal shared feature cannot maintain high similarities with multiple classifiers simultaneously in the context of minimizing classification loss. In addition, this feature learning mechanism ignores the spatial and semantic distinctions between different attributes. To address these limitations, we propose a novel disentangled attribute feature learning (DAFL) framework to learn a disentangled feature for each attribute, which exploits the semantic and spatial characteristics of attributes. The framework mainly consists of learnable semantic queries, a cascaded semantic-spatial cross-attention (SSCA) module, and a group attention merging (GAM) module. Specifically, based on learnable semantic queries, the cascaded SSCA module iteratively enhances the spatial localization of attribute-related regions and aggregates region features into multiple disentangled attribute features, used for classification and updating learnable semantic queries. The GAM module splits attributes into groups based on spatial distribution and utilizes reliable group attention to supervise query attention maps. Experiments on PETA, RAPv1, PA100k, and RAPv2 show that the proposed method performs favorably against state-of-the-art methods.
ER -