SpatialFormer: Semantic and Target Aware Attentions for Few-Shot Learning

Authors

  • Jinxiang Lai Tencent
  • Siqian Yang Tencent
  • Wenlong Wu Tencent
  • Tao Wu Tencent
  • Guannan Jiang CATL
  • Xi Wang CATL
  • Jun Liu Tencent
  • Bin-Bin Gao Tencent
  • Wei Zhang CATL
  • Yuan Xie East China Normal University
  • Chengjie Wang Tencent; Shanghai Jiao Tong University

DOI:

https://doi.org/10.1609/aaai.v37i7.26016

Keywords:

ML: Classification and Regression, CV: Object Detection & Categorization, CV: Representation Learning for Vision, ML: Meta Learning

Abstract

Recent Few-Shot Learning (FSL) methods emphasize generating discriminative embedding features to precisely measure the similarity between support and query sets. Current CNN-based cross-attention approaches generate discriminative representations by enhancing the mutually semantically similar regions of support and query pairs. However, these approaches suffer from two problems: the CNN structure produces inaccurate attention maps based on local features, and mutually similar backgrounds cause distraction. To alleviate these problems, we design a novel SpatialFormer structure to generate more accurate attention regions based on global features. Unlike the traditional Transformer, which models intrinsic instance-level similarity and causes accuracy degradation in FSL, our SpatialFormer explores the semantic-level similarity between pair inputs to boost performance. We then derive two specific attention modules, named SpatialFormer Semantic Attention (SFSA) and SpatialFormer Target Attention (SFTA), to enhance the target object regions while reducing background distraction. In particular, SFSA highlights the regions with the same semantic information between pair features, and SFTA finds potential foreground object regions of a novel feature that are similar to base categories. Extensive experiments show that our methods are effective and achieve new state-of-the-art results on few-shot classification benchmarks.
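The abstract's core idea of semantic-level cross-attention between a support and a query feature map can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' exact SFSA module: each spatial position of the query attends to all support positions, and the attended support semantics are added back to the query map, so mutually similar object regions are enhanced.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def semantic_cross_attention(query, support):
    """Hypothetical sketch of semantic-level cross-attention between a
    query and a support feature map (shapes (C, H, W)). Each query
    position attends over all support positions; the attended support
    context is added residually, enhancing mutually similar regions.
    """
    C, H, W = query.shape
    q = query.reshape(C, -1).T             # (HW, C): query positions as tokens
    s = support.reshape(C, -1).T           # (HW, C): support positions as tokens
    attn = softmax(q @ s.T / np.sqrt(C))   # (HW, HW): query-to-support similarity
    context = attn @ s                     # (HW, C): support context per query position
    return query + context.T.reshape(C, H, W)  # residual enhancement
```

In the paper's setting this pairwise attention would operate on global (whole-map) features rather than local CNN responses, which is the claimed source of the more accurate attention regions.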

Published

2023-06-26

How to Cite

Lai, J., Yang, S., Wu, W., Wu, T., Jiang, G., Wang, X., Liu, J., Gao, B.-B., Zhang, W., Xie, Y., & Wang, C. (2023). SpatialFormer: Semantic and Target Aware Attentions for Few-Shot Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 8430-8437. https://doi.org/10.1609/aaai.v37i7.26016

Section

AAAI Technical Track on Machine Learning II