Looking Wider for Better Adaptive Representation in Few-Shot Learning
Abstract

Building a good feature space is essential for metric-based few-shot algorithms to recognize a novel class from only a few samples. This feature space is typically built by Convolutional Neural Networks (CNNs). However, CNNs focus primarily on local information because of their limited receptive fields, and the global information carried by distant pixels is not well exploited. Meanwhile, having a global understanding of the current task and attending to distinct regions of the same sample for different queries are important for few-shot classification. To tackle these problems, we propose the Cross Non-Local Neural Network (CNL) to capture the long-range dependencies within a sample and across the current task. CNL dynamically extracts task-specific and context-aware features by strengthening the feature of a sample at each position via aggregating information from all positions of the sample itself and of the current task. To reduce the loss of important information, we maximize the mutual information between the original and refined features as a constraint. Moreover, we add a task-specific scaling to handle the multi-scale, task-specific features extracted by CNL. We conduct extensive experiments to validate the proposed algorithm, which achieves new state-of-the-art performance on two public benchmarks.
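The core aggregation step the abstract describes, refining each position of a sample using all positions of the sample and the current task, can be sketched as a softmax-weighted residual. This is a minimal NumPy illustration under assumed shapes, not the authors' implementation: `query_feat` and `task_feat` are hypothetical names for one sample's flattened feature map (positions × channels) and the pooled positions of the whole task episode.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable row-wise softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_non_local(query_feat, task_feat):
    """Hedged sketch of a cross non-local refinement.

    query_feat: (Nq, C) positions of one sample
    task_feat:  (Nt, C) positions aggregated over the current task
    Returns a (Nq, C) refined feature map.
    """
    # affinity of every query position with every task position
    attn = softmax(query_feat @ task_feat.T / np.sqrt(task_feat.shape[1]))
    # aggregate task-wide information, added back as a residual so the
    # original (local CNN) features are strengthened, not replaced
    return query_feat + attn @ task_feat

rng = np.random.default_rng(0)
q = rng.standard_normal((16, 8))   # 16 spatial positions, 8 channels
t = rng.standard_normal((48, 8))   # task-level positions (e.g. support set)
refined = cross_non_local(q, t)
print(refined.shape)               # same spatial layout, task-aware features
```

Because the attention weights depend on the task features, the same sample is refined differently in different episodes, which is the task-specific behavior the abstract motivates; the paper's actual CNL module additionally includes learned projections and the mutual-information constraint.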
How to Cite
Zhao, J., Yang, Y., Lin, X., Yang, J., & He, L. (2021). Looking Wider for Better Adaptive Representation in Few-Shot Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 10981-10989. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17311
AAAI Technical Track on Machine Learning V