RankDNN: Learning to Rank for Few-Shot Learning
DOI:
https://doi.org/10.1609/aaai.v37i1.25150
Keywords:
CV: Other Foundations of Computer Vision, ML: Learning Preferences or Rankings, ML: Meta Learning, ML: Transfer, Domain Adaptation, Multi-Task Learning
Abstract
This paper introduces a new few-shot learning pipeline that casts relevance ranking for image retrieval as binary ranking relation classification. In comparison to image classification, ranking relation classification is sample-efficient and domain-agnostic. Moreover, it provides a new perspective on few-shot learning and is complementary to state-of-the-art methods. The core component of our deep neural network is a simple MLP, which takes as input an image triplet encoded as the difference between two vector Kronecker products and outputs a binary relevance ranking order. The proposed RankMLP can be built on top of any state-of-the-art feature extractor, and the entire deep neural network is called the ranking deep neural network, or RankDNN. RankDNN can also be flexibly combined with other post-processing methods. During meta-testing, RankDNN ranks support images according to their similarity to the query samples, and each query sample is assigned the class label of its nearest neighbor. Experiments demonstrate that RankDNN effectively improves the performance of its baselines across a variety of backbones and outperforms previous state-of-the-art algorithms on multiple few-shot learning benchmarks, including miniImageNet, tieredImageNet, Caltech-UCSD Birds, and CIFAR-FS. Furthermore, experiments on the cross-domain challenge demonstrate the superior transferability of RankDNN. The code is available at: https://github.com/guoqianyu-alberta/RankDNN.
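To make the triplet encoding concrete, the sketch below (Python/PyTorch) is an illustrative approximation of the idea described in the abstract, not the authors' released implementation: the class name, feature dimension, and layer widths are assumptions. It encodes a triplet of backbone feature vectors (query q, candidates a and b) as the difference of two vector Kronecker products, kron(q, a) - kron(q, b), and classifies that vector with a small MLP into a binary ranking relation.

# Illustrative sketch only; names and dimensions are assumptions, not the authors' code.
import torch
import torch.nn as nn

class RankMLPSketch(nn.Module):
    def __init__(self, feat_dim: int = 64, hidden_dim: int = 1024):
        super().__init__()
        # The Kronecker-product difference of two feat_dim vectors has size feat_dim ** 2.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim * feat_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, 1),  # logit for the binary ranking relation
        )

    @staticmethod
    def encode_triplet(q: torch.Tensor, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Batched kron(q, a) - kron(q, b), computed via outer products then flattened.
        kron_qa = torch.einsum("bi,bj->bij", q, a).flatten(1)
        kron_qb = torch.einsum("bi,bj->bij", q, b).flatten(1)
        return kron_qa - kron_qb

    def forward(self, q: torch.Tensor, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # A positive logit means "a is more relevant to the query q than b".
        return self.mlp(self.encode_triplet(q, a, b)).squeeze(-1)

# Minimal usage with random stand-ins for backbone features:
if __name__ == "__main__":
    model = RankMLPSketch(feat_dim=64)
    q, a, b = (torch.randn(8, 64) for _ in range(3))
    logits = model(q, a, b)        # shape (8,)
    pred = (logits > 0).long()     # 1 where a is ranked above b for each query

At meta-test time, such pairwise ranking decisions over the support images can be aggregated to order them by similarity to each query, after which the query takes the label of its nearest-ranked neighbor, as described in the abstract.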
Published
2023-06-26
How to Cite
Guo, Q., Haotong, G., Wei, X., Fu, Y., Yu, Y., Zhang, W., & Ge, W. (2023). RankDNN: Learning to Rank for Few-Shot Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1), 728-736. https://doi.org/10.1609/aaai.v37i1.25150
Issue
Section
AAAI Technical Track on Computer Vision I