Partial Is Better Than All: Revisiting Fine-tuning Strategy for Few-shot Learning

Authors

  • Zhiqiang Shen Carnegie Mellon University
  • Zechun Liu Hong Kong University of Science and Technology
  • Jie Qin Inception Institute of Artificial Intelligence
  • Marios Savvides Carnegie Mellon University
  • Kwang-Ting Cheng Hong Kong University of Science and Technology

DOI:

https://doi.org/10.1609/aaai.v35i11.17155

Keywords:

Representation Learning, Transfer/Adaptation/Multi-task/Meta/Automated Learning, Object Detection & Categorization, Learning & Optimization for CV

Abstract

The goal of few-shot learning is to learn a classifier that can recognize unseen classes from limited labeled support data. A common practice for this task is to train a model on the base set first and then transfer it to the novel classes through fine-tuning or meta-learning. However, as the base classes have no overlap with the novel set, simply transferring all the knowledge from the base data is not an optimal solution, since some knowledge in the base model may be biased or even harmful to the novel classes. In this paper, we propose to transfer partial knowledge by freezing or fine-tuning particular layer(s) in the base model. Specifically, layers chosen to be fine-tuned are assigned individual learning rates to control the extent of preserved transferability. To determine which layers to recast and what learning rates to assign them, we introduce an evolutionary search based method that efficiently and simultaneously locates the target layers and determines their individual learning rates. We conduct extensive experiments on CUB and mini-ImageNet to demonstrate the effectiveness of the proposed method, which achieves state-of-the-art performance under both meta-learning and non-meta based frameworks. Furthermore, we extend our method to the conventional pre-training + fine-tuning paradigm and obtain consistent improvements.
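To make the search idea concrete, below is a minimal sketch (not the authors' released code) of how one might encode per-layer learning rates and evolve them. It assumes a PyTorch model whose top-level children are the searchable layers; a rate of 0.0 encodes "freeze this layer", and `evaluate` is a hypothetical callback that fine-tunes with a candidate's rates and returns few-shot validation accuracy. Names, population size, and the learning-rate choices are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch only: per-layer learning rates + evolutionary search.
import random

import torch
import torch.nn as nn


def make_optimizer(model: nn.Module, layer_lrs):
    """Build an SGD optimizer with one learning rate per top-level layer.

    A learning rate of 0.0 freezes the corresponding layer entirely.
    (Assumes at least one layer has a nonzero rate.)
    """
    groups = []
    for (_, module), lr in zip(model.named_children(), layer_lrs):
        if lr > 0.0:
            groups.append({"params": module.parameters(), "lr": lr})
        else:
            for p in module.parameters():  # frozen: excluded from updates
                p.requires_grad = False
    return torch.optim.SGD(groups, momentum=0.9)


def evolutionary_search(num_layers, evaluate, pop_size=20, generations=10,
                        lr_choices=(0.0, 1e-4, 1e-3, 1e-2)):
    """Evolve a per-layer learning-rate vector that maximizes `evaluate`.

    Each candidate is a list of `num_layers` rates drawn from `lr_choices`.
    """
    population = [[random.choice(lr_choices) for _ in range(num_layers)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Rank candidates by fitness (e.g. few-shot validation accuracy).
        population.sort(key=evaluate, reverse=True)
        parents = population[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, num_layers)  # single-point crossover
            child = a[:cut] + b[cut:]
            idx = random.randrange(num_layers)     # one-gene mutation
            child[idx] = random.choice(lr_choices)
            children.append(child)
        population = parents + children
    return max(population, key=evaluate)
```

In the paper's setting, the fitness signal would come from briefly fine-tuning on the support set with the candidate's rates and measuring validation accuracy; `evaluate` here stands in for that entire inner loop.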

Published

2021-05-18

How to Cite

Shen, Z., Liu, Z., Qin, J., Savvides, M., & Cheng, K.-T. (2021). Partial Is Better Than All: Revisiting Fine-tuning Strategy for Few-shot Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(11), 9594-9602. https://doi.org/10.1609/aaai.v35i11.17155

Section

AAAI Technical Track on Machine Learning IV