Looking Wider for Better Adaptive Representation in Few-Shot Learning

Authors

  • Jiabao Zhao Shanghai Key Laboratory of Multidimensional Information Processing, ECNU, Shanghai, China; School of Computer Science and Technology, East China Normal University, Shanghai, China
  • Yifan Yang Transwarp Technology (Shanghai) Co., Ltd, China
  • Xin Lin Shanghai Key Laboratory of Multidimensional Information Processing, ECNU, Shanghai, China; School of Computer Science and Technology, East China Normal University, Shanghai, China
  • Jing Yang School of Computer Science and Technology, East China Normal University, Shanghai, China
  • Liang He Shanghai Key Laboratory of Multidimensional Information Processing, ECNU, Shanghai, China; School of Computer Science and Technology, East China Normal University, Shanghai, China

DOI:

https://doi.org/10.1609/aaai.v35i12.17311

Keywords:

Transfer/Adaptation/Multi-task/Meta/Automated Learning

Abstract

Building a good feature space is essential for metric-based few-shot algorithms to recognize a novel class with only a few samples. The feature space is often built by Convolutional Neural Networks (CNNs). However, CNNs primarily focus on local information within their limited receptive fields, and the global information carried by distant pixels is not well used. Meanwhile, having a global understanding of the current task and focusing on distinct regions of the same sample for different queries are both important for few-shot classification. To tackle these problems, we propose the Cross Non-Local Neural Network (CNL) for capturing the long-range dependencies within the samples and the current task. CNL extracts task-specific and context-aware features dynamically by strengthening the feature of a sample at each position via aggregating information from all positions of the sample itself and of the current task. To reduce the loss of important information, we maximize the mutual information between the original and refined features as a constraint. Moreover, we add a task-specific scaling to handle the multi-scale, task-specific features extracted by CNL. We conduct extensive experiments to validate our proposed algorithm, which achieves new state-of-the-art performance on two public benchmarks.
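The core operation the abstract describes — strengthening the feature at each position by aggregating, with similarity-based weights, information from all positions of the sample and the current task — follows the general non-local attention pattern. The sketch below is illustrative only: the function name, the dot-product similarity, and the residual connection are assumptions in the spirit of non-local blocks, not the authors' implementation.

```python
import math

def cross_nonlocal_refine(query_feats, context_feats):
    """Refine each query position's feature vector by a softmax-weighted
    aggregate over every context position (the sample itself plus the
    task's features), added back residually to the original feature.

    query_feats:   list of feature vectors (lists of floats) to refine.
    context_feats: list of feature vectors aggregated over (all positions
                   of the sample and the current task).
    """
    refined = []
    for q in query_feats:
        # Dot-product similarity of this position to every context position.
        sims = [sum(a * b for a, b in zip(q, c)) for c in context_feats]
        # Numerically stable softmax over the similarities.
        m = max(sims)
        weights = [math.exp(s - m) for s in sims]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Weighted aggregate of context features for this position.
        agg = [sum(w * c[i] for w, c in zip(weights, context_feats))
               for i in range(len(q))]
        # Residual connection: original feature strengthened by the aggregate.
        refined.append([qi + ai for qi, ai in zip(q, agg)])
    return refined
```

Because the context includes positions from the whole task, the same sample can be refined differently for different queries, which is the task-specific, context-aware behavior the abstract emphasizes.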

Published

2021-05-18

How to Cite

Zhao, J., Yang, Y., Lin, X., Yang, J., & He, L. (2021). Looking Wider for Better Adaptive Representation in Few-Shot Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 10981-10989. https://doi.org/10.1609/aaai.v35i12.17311

Section

AAAI Technical Track on Machine Learning V