Inferring Prototypes for Multi-Label Few-Shot Image Classification with Word Vector Guided Attention

Authors

  • Kun Yan, Peking University
  • Chenbin Zhang, Peking University
  • Jun Hou, SenseTime Group Limited
  • Ping Wang, Peking University
  • Zied Bouraoui, CRIL CNRS & Univ Artois
  • Shoaib Jameel, University of Essex
  • Steven Schockaert, Cardiff University

DOI:

https://doi.org/10.1609/aaai.v36i3.20205

Keywords:

Computer Vision (CV)

Abstract

Multi-label few-shot image classification (ML-FSIC) is the task of assigning descriptive labels to previously unseen images, based on a small number of training examples. A key feature of the multi-label setting is that images often have multiple labels, which typically refer to different regions of the image. When estimating prototypes in a metric-based setting, it is thus important to determine which regions are relevant for which labels, but the limited amount of training data makes this highly challenging. As a solution, in this paper we propose to use word embeddings as a form of prior knowledge about the meaning of the labels. In particular, visual prototypes are obtained by aggregating the local feature maps of the support images, using an attention mechanism that relies on the label embeddings. As an important advantage, our model can infer prototypes for unseen labels without the need for fine-tuning any model parameters, which demonstrates its strong generalization abilities. Experiments on COCO and PASCAL VOC furthermore show that our model substantially improves the current state-of-the-art.
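To make the mechanism described in the abstract concrete, the sketch below shows one way an attention mechanism guided by label word vectors could aggregate local feature maps of support images into a per-label prototype. This is a minimal illustration under assumptions, not the authors' implementation: the linear projection `proj`, the dimensions, and the cosine scoring of a query image are all hypothetical choices made for the example.

```python
# Minimal sketch: word-vector-guided attention for prototype inference.
# Assumptions: 300-d label word vectors, 512-d CNN local features, a learned
# linear projection from word space to visual space. Illustrative only.
import torch
import torch.nn.functional as F

def infer_prototype(feature_maps, label_embedding, proj):
    """Aggregate local support features into a prototype for one label.

    feature_maps:     (N, H*W, D) local features from N support images
    label_embedding:  (E,) word vector for the target label
    proj:             linear layer mapping word space to visual space (E -> D)
    """
    query = proj(label_embedding)                                 # (D,)
    locals_ = feature_maps.reshape(-1, feature_maps.size(-1))     # (N*H*W, D)
    # Attention score of each local feature = similarity to the label query
    scores = locals_ @ query / query.norm().clamp(min=1e-8)
    weights = torch.softmax(scores, dim=0)                        # (N*H*W,)
    # Prototype = attention-weighted sum of local features
    return (weights.unsqueeze(-1) * locals_).sum(dim=0)           # (D,)

# Toy usage: 5 support images with 7x7 feature maps
torch.manual_seed(0)
feats = torch.randn(5, 49, 512)
word_vec = torch.randn(300)
proj = torch.nn.Linear(300, 512, bias=False)
prototype = infer_prototype(feats, word_vec, proj)

# A query image would then be scored against the prototype, e.g. by cosine similarity
query_feat = torch.randn(512)
score = F.cosine_similarity(prototype, query_feat, dim=0)
print(prototype.shape, float(score))
```

Because the prototype is computed purely from the label's word vector and the support features, a new (unseen) label only requires its word embedding; no model parameters need to be fine-tuned, which is the generalization property highlighted in the abstract.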

Published

2022-06-28

How to Cite

Yan, K., Zhang, C., Hou, J., Wang, P., Bouraoui, Z., Jameel, S., & Schockaert, S. (2022). Inferring Prototypes for Multi-Label Few-Shot Image Classification with Word Vector Guided Attention. Proceedings of the AAAI Conference on Artificial Intelligence, 36(3), 2991-2999. https://doi.org/10.1609/aaai.v36i3.20205

Section

AAAI Technical Track on Computer Vision III