One-Shot Image Classification by Learning to Restore Prototypes

Authors

  • Wanqi Xue, National University of Singapore
  • Wei Wang, National University of Singapore

DOI:

https://doi.org/10.1609/aaai.v34i04.6130

Abstract

One-shot image classification aims to train image classifiers on datasets with only one image per category. This is challenging for modern deep neural networks, which typically require hundreds or thousands of images per class. In this paper, we adopt metric learning for this problem; metric learning has been applied to few- and many-shot image classification by comparing the distance between the test image and the center of each class in the feature space. For one-shot learning, however, existing metric learning approaches suffer poor performance because the single training image may not be representative of its class. For example, if the image lies far from the class center in the feature space, metric-learning-based algorithms are unlikely to make correct predictions for the test images because the decision boundary is shifted by this noisy image. To address this issue, we propose a simple yet effective regression model, denoted RestoreNet, which learns a class-agnostic transformation on the image feature that moves the image closer to the class center in the feature space. Experiments demonstrate that RestoreNet obtains superior performance over state-of-the-art methods on a broad range of datasets. Moreover, RestoreNet can easily be combined with other methods to achieve further improvement.
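The idea in the abstract can be sketched numerically: learn one class-agnostic map that pulls any feature toward its class center, then apply it to the single-shot prototypes before nearest-prototype classification. The sketch below is an illustrative assumption, not the paper's actual architecture: it uses synthetic Gaussian features and a ridge-regression linear map in place of the learned deep regression model, purely to show the restore-then-classify mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 classes with true centers in a 5-D feature space.
dim, n_classes, n_train = 5, 3, 600
centers = rng.normal(size=(n_classes, dim)) * 3.0

# Base-class training data: noisy features paired with their class centers.
labels = rng.integers(0, n_classes, size=n_train)
feats = centers[labels] + rng.normal(size=(n_train, dim))

# Class-agnostic "restoration": one ridge-regression map trained to move
# any feature toward its class center, with no class-specific parameters.
lam = 1e-3
A = feats.T @ feats + lam * np.eye(dim)
W = np.linalg.solve(A, feats.T @ centers[labels])   # (dim, dim)

def restore(f):
    """Apply the learned class-agnostic transformation."""
    return f @ W

# One-shot episode: a single noisy support image per class is the prototype.
protos = centers + rng.normal(size=(n_classes, dim)) * 1.5
restored_protos = restore(protos)   # prototypes pulled toward class centers

def predict(query, prototypes):
    """Nearest-prototype classification by Euclidean distance."""
    return int(np.argmin(np.linalg.norm(prototypes - query, axis=1)))
```

On average, the restored prototypes land closer to the true class centers than the raw one-shot prototypes, which is exactly the shifted-decision-boundary problem the abstract describes.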

Published

2020-04-03

How to Cite

Xue, W., & Wang, W. (2020). One-Shot Image Classification by Learning to Restore Prototypes. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 6558-6565. https://doi.org/10.1609/aaai.v34i04.6130

Section

AAAI Technical Track: Machine Learning