Diversity Transfer Network for Few-Shot Learning

Authors

  • Mengting Chen, Huazhong University of Science and Technology
  • Yuxin Fang, Huazhong University of Science and Technology
  • Xinggang Wang, Huazhong University of Science and Technology
  • Heng Luo, Horizon Robotics, Inc.
  • Yifeng Geng, Horizon Robotics, Inc.
  • Xinyu Zhang, Independent Researcher
  • Chang Huang, Horizon Robotics, Inc.
  • Wenyu Liu, Huazhong University of Science and Technology
  • Bo Wang, Vector Institute & PMCC, UHN

DOI:

https://doi.org/10.1609/aaai.v34i07.6628

Abstract

Few-shot learning is a challenging task that aims to train a classifier for unseen classes with only a few training examples. The main difficulty of few-shot learning lies in the lack of intra-class diversity when only a handful of training samples are available. To alleviate this problem, we propose a novel generative framework, Diversity Transfer Network (DTN), that learns to transfer latent diversities from known categories and composite them with support features to generate diverse samples for novel categories in feature space. The learning problem of sample generation (i.e., diversity transfer) is solved by minimizing an effective meta-classification loss in a single-stage network, instead of the generative loss used in previous works. In addition, an organized auxiliary-task co-training scheme over known categories is proposed to stabilize the meta-training process of DTN. We perform extensive experiments and ablation studies on three datasets, i.e., miniImageNet, CIFAR100, and CUB. The results show that DTN, with single-stage training and faster convergence, achieves state-of-the-art results among feature-generation-based few-shot learning methods. Code and supplementary material are available at: https://github.com/Yuxin-CV/DTN.
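To illustrate the core idea stated in the abstract (transferring intra-class diversity from known categories onto novel-class support features, trained with a meta-classification rather than a generative loss), below is a minimal PyTorch sketch. It is an assumption-laden illustration, not the authors' released implementation: the module name `DiversityGenerator`, the MLP architecture, the feature dimension, and the additive composition are all hypothetical choices; consult the official repository for the actual network.

```python
# Hedged sketch of feature-space diversity transfer, assuming a simple additive
# generator; names and architecture are illustrative, not the DTN release.
import torch
import torch.nn as nn


class DiversityGenerator(nn.Module):
    """Transfers intra-class variation from a known class onto a support feature."""

    def __init__(self, feat_dim=640, hidden_dim=640):
        super().__init__()
        # Encodes the "diversity" carried by a pair of same-class reference features.
        self.encode = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, feat_dim),
        )

    def forward(self, support_feat, ref_feat_a, ref_feat_b):
        # support_feat: feature of a novel-class support sample      [B, feat_dim]
        # ref_feat_a/b: two features from the same known category    [B, feat_dim]
        diversity = self.encode(torch.cat([ref_feat_a, ref_feat_b], dim=-1))
        # Composite the transferred diversity with the support feature.
        return support_feat + diversity


# Usage sketch: generated features augment the single support feature; their mean
# acts as a class proxy that would be trained end-to-end with a standard
# cross-entropy (meta-classification) loss over query samples.
gen = DiversityGenerator()
support = torch.randn(4, 640)                    # 4 novel-class support features
ref_a, ref_b = torch.randn(4, 640), torch.randn(4, 640)
generated = gen(support, ref_a, ref_b)           # diverse synthetic features
proxy = torch.stack([support, generated], dim=1).mean(dim=1)
```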

Published

2020-04-03

How to Cite

Chen, M., Fang, Y., Wang, X., Luo, H., Geng, Y., Zhang, X., Huang, C., Liu, W., & Wang, B. (2020). Diversity Transfer Network for Few-Shot Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 10559-10566. https://doi.org/10.1609/aaai.v34i07.6628

Section

AAAI Technical Track: Vision