Attributes-Guided and Pure-Visual Attention Alignment for Few-Shot Recognition

Authors

  • Siteng Huang, Zhejiang University; Machine Intelligence Lab (MiLAB), AI Division, School of Engineering, Westlake University
  • Min Zhang, Machine Intelligence Lab (MiLAB), AI Division, School of Engineering, Westlake University
  • Yachen Kang, Machine Intelligence Lab (MiLAB), AI Division, School of Engineering, Westlake University
  • Donglin Wang, Machine Intelligence Lab (MiLAB), AI Division, School of Engineering, Westlake University

DOI:

https://doi.org/10.1609/aaai.v35i9.16957

Keywords:

Transfer/Adaptation/Multi-task/Meta/Automated Learning

Abstract

Few-shot recognition aims to recognize novel categories from only a limited number of labeled examples per class. To encourage learning from a supplementary view, recent approaches have introduced auxiliary semantic modalities into effective metric-learning frameworks that learn a feature similarity between training samples (support set) and test samples (query set). However, these approaches augment only the representations of samples for which semantics are available, ignoring the query set; this misses potential improvement and may cause a shift between the combined multi-modal representations and the pure-visual representations. In this paper, we devise an attributes-guided attention module (AGAM) that exploits human-annotated attributes to learn more discriminative features. This plug-and-play module lets visual contents and corresponding attributes jointly focus on important channels and regions for the support set, while the same feature selection is performed for the query set using visual information alone, since its attributes are unavailable. Representations from both sets are therefore improved in a fine-grained manner. Moreover, an attention alignment mechanism is proposed to distill knowledge from the attributes-guided branch to the pure-visual branch for samples without attributes. Extensive experiments and analysis show that our proposed module significantly improves simple metric-based approaches, achieving state-of-the-art performance on different datasets and settings.
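The abstract describes the mechanism only at a high level. Below is a minimal PyTorch-style sketch of the general idea: two attention branches (one conditioned on attributes, one purely visual) each produce channel and spatial attention over a convolutional feature map, and an alignment loss pulls the pure-visual attention toward the attributes-guided one. The class names, layer sizes, and the use of an MSE alignment loss are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch only; names and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionBranch(nn.Module):
    """Channel + spatial attention over a feature map.

    If attr_dim > 0, the branch is conditioned on an attribute vector
    (attributes-guided); with attr_dim == 0 it is purely visual (self-guided).
    """

    def __init__(self, channels: int, attr_dim: int = 0):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels + attr_dim, channels // 4),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 4, channels),
        )
        self.spatial_conv = nn.Conv2d(channels + attr_dim, 1, kernel_size=1)

    def forward(self, feat, attr=None):
        b, c, h, w = feat.shape
        pooled = feat.mean(dim=(2, 3))                      # global visual context (B, C)
        if attr is not None:
            pooled = torch.cat([pooled, attr], dim=1)       # condition on attributes
            attr_map = attr[:, :, None, None].expand(-1, -1, h, w)
            spatial_in = torch.cat([feat, attr_map], dim=1)
        else:
            spatial_in = feat
        ch_att = torch.sigmoid(self.channel_mlp(pooled))        # (B, C) channel weights
        sp_att = torch.sigmoid(self.spatial_conv(spatial_in))   # (B, 1, H, W) spatial weights
        refined = feat * ch_att[:, :, None, None] * sp_att      # re-weighted features
        return refined, ch_att, sp_att


def attention_alignment_loss(att_guided, att_visual):
    """Pull pure-visual attention toward the attributes-guided attention;
    the guided branch is detached so gradients only update the visual branch."""
    return F.mse_loss(att_visual, att_guided.detach())


# Toy usage: 5 support images with 64-channel feature maps and 32-d attributes.
feat = torch.randn(5, 64, 10, 10)
attrs = torch.randn(5, 32)

guided = AttentionBranch(channels=64, attr_dim=32)   # support set: attributes available
visual = AttentionBranch(channels=64, attr_dim=0)    # query set / no-attribute path

g_feat, g_ch, g_sp = guided(feat, attrs)
v_feat, v_ch, v_sp = visual(feat)

align = attention_alignment_loss(g_ch, v_ch) + attention_alignment_loss(g_sp, v_sp)
```

In this sketch, the alignment term would be added to the usual metric-learning objective so that, at test time, query samples without attributes still benefit from attribute-like feature selection.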

Published

2021-05-18

How to Cite

Huang, S., Zhang, M., Kang, Y., & Wang, D. (2021). Attributes-Guided and Pure-Visual Attention Alignment for Few-Shot Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 7840-7847. https://doi.org/10.1609/aaai.v35i9.16957

Issue

Vol. 35 No. 9 (2021)

Section

AAAI Technical Track on Machine Learning II