RankDNN: Learning to Rank for Few-Shot Learning

Authors

  • Qianyu Guo, Nebula AI Group, School of Computer Science, Fudan University, Shanghai, China; Shanghai Key Laboratory of Intelligent Information Processing, Shanghai, China
  • Gong Haotong, Nebula AI Group, School of Computer Science, Fudan University, Shanghai, China
  • Xujun Wei, Nebula AI Group, School of Computer Science, Fudan University, Shanghai, China; Academy for Engineering & Technology, Fudan University, Shanghai, China
  • Yanwei Fu, Shanghai Key Laboratory of Intelligent Information Processing, Shanghai, China
  • Yizhou Yu, Department of Computer Science, The University of Hong Kong, Hong Kong, China
  • Wenqiang Zhang, Shanghai Key Laboratory of Intelligent Information Processing, Shanghai, China; Academy for Engineering & Technology, Fudan University, Shanghai, China
  • Weifeng Ge, Nebula AI Group, School of Computer Science, Fudan University, Shanghai, China; Shanghai Key Laboratory of Intelligent Information Processing, Shanghai, China

DOI:

https://doi.org/10.1609/aaai.v37i1.25150

Keywords:

CV: Other Foundations of Computer Vision, ML: Learning Preferences or Rankings, ML: Meta Learning, ML: Transfer, Domain Adaptation, Multi-Task Learning

Abstract

This paper introduces a new few-shot learning pipeline that casts relevance ranking for image retrieval as binary ranking relation classification. In comparison to image classification, ranking relation classification is sample efficient and domain agnostic. In addition, it provides a new perspective on few-shot learning and is complementary to state-of-the-art methods. The core component of our deep neural network is a simple MLP, which takes as input an image triplet encoded as the difference between two vector-Kronecker products, and outputs a binary relevance ranking order. The proposed RankMLP can be built on top of any state-of-the-art feature extractor, and the resulting deep neural network is called the ranking deep neural network, or RankDNN. Moreover, RankDNN can be flexibly fused with other post-processing methods. During meta-testing, RankDNN ranks support images according to their similarity with the query samples, and each query sample is assigned the class label of its nearest neighbor. Experiments demonstrate that RankDNN effectively improves the performance of its baselines built on a variety of backbones, and it outperforms previous state-of-the-art algorithms on multiple few-shot learning benchmarks, including miniImageNet, tieredImageNet, Caltech-UCSD Birds, and CIFAR-FS. Furthermore, experiments on the cross-domain challenge demonstrate the superior transferability of RankDNN. The code is available at: https://github.com/guoqianyu-alberta/RankDNN.
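To make the encoding described in the abstract concrete, the following is a minimal sketch, not the authors' implementation: it encodes a triplet (query, support 1, support 2) as the difference of two vector-Kronecker products and feeds it to an MLP that predicts the binary ranking order. All class names, layer sizes, and the sign convention of the output logit are illustrative assumptions; the official code at the repository above defines the actual architecture.

```python
import torch
import torch.nn as nn


class RankMLPSketch(nn.Module):
    """Minimal sketch of a binary ranking-relation classifier (assumed sizes)."""

    def __init__(self, feat_dim: int = 64, hidden_dim: int = 1024):
        super().__init__()
        # The Kronecker product of two feat_dim vectors has feat_dim**2 entries.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim * feat_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, 1),  # single logit encoding the ranking order
        )

    @staticmethod
    def kron(u: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
        # Batched vector Kronecker (outer) product, flattened to a vector.
        return torch.einsum("bi,bj->bij", u, v).flatten(1)

    def forward(self, q: torch.Tensor, s1: torch.Tensor, s2: torch.Tensor) -> torch.Tensor:
        # Triplet encoding: difference between the two vector-Kronecker products
        # of the query with each support feature, then binary classification.
        x = self.kron(q, s1) - self.kron(q, s2)
        return self.mlp(x)  # logit > 0 => s1 ranked above s2 (assumed convention)


if __name__ == "__main__":
    # Toy usage with random backbone features; any feature extractor could supply q, s1, s2.
    q, s1, s2 = (torch.randn(4, 64) for _ in range(3))
    print(RankMLPSketch()(q, s1, s2).shape)  # torch.Size([4, 1])
```

At meta-test time, such pairwise ranking predictions could be aggregated to order the support images by relevance to each query, which is then labeled with the class of its top-ranked (nearest) support sample, as the abstract describes.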

Published

2023-06-26

How to Cite

Guo, Q., Haotong, G., Wei, X., Fu, Y., Yu, Y., Zhang, W., & Ge, W. (2023). RankDNN: Learning to Rank for Few-Shot Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1), 728-736. https://doi.org/10.1609/aaai.v37i1.25150

Section

AAAI Technical Track on Computer Vision I