M2SD: Multiple Mixing Self-Distillation for Few-Shot Class-Incremental Learning

Authors

  • Jinhao Lin, South China University of Technology; Alibaba Group
  • Ziheng Wu, Alibaba Group
  • Weifeng Lin, South China University of Technology; Alibaba Group
  • Jun Huang, Alibaba Group
  • RongHua Luo, South China University of Technology

DOI:

https://doi.org/10.1609/aaai.v38i4.28129

Keywords:

CV: Representation Learning for Vision

Abstract

Few-shot class-incremental learning (FSCIL) is a challenging machine learning task that aims to recognize new classes from a limited number of instances while preserving the ability to classify previously learned classes, without retraining the entire model. Updating the model with new classes from such limited training data is difficult, particularly when balancing the acquisition of new knowledge against the retention of old knowledge. To address these issues, we propose a novel training-phase method named Multiple Mixing Self-Distillation (M2SD). Specifically, we introduce a dual-branch structure that facilitates the expansion of the entire feature space to accommodate new classes, together with a feature enhancement component that passes additional, enhanced information back to the base network through self-distillation, improving classification performance when new classes are added. After training, we discard both structures, leaving only the primary network to classify new-class instances. Extensive experiments demonstrate that our approach outperforms previous state-of-the-art methods.
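The abstract only sketches the training recipe at a high level. As a rough illustration of the general pattern it describes (an auxiliary enhancement branch whose softened predictions are distilled back into the primary network, then discarded at inference), here is a minimal PyTorch-style sketch. All names here (DualBranchSelfDistill, self_distillation_loss, enhance, aux_classifier) are hypothetical stand-ins, and the mixing operations and exact losses of M2SD are defined in the paper, not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualBranchSelfDistill(nn.Module):
    """Illustrative dual-branch wrapper (hypothetical; not the authors' code).

    A shared backbone feeds (1) the primary classifier that is kept after
    training and (2) an auxiliary feature-enhancement branch with its own
    head, both of which are discarded after training.
    """
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        self.enhance = nn.Sequential(          # auxiliary enhancement branch
            nn.Linear(feat_dim, feat_dim),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim, feat_dim),
        )
        self.classifier = nn.Linear(feat_dim, num_classes)      # kept
        self.aux_classifier = nn.Linear(feat_dim, num_classes)  # discarded

    def forward(self, x):
        feat = self.backbone(x)
        logits = self.classifier(feat)
        aux_logits = self.aux_classifier(self.enhance(feat))
        return logits, aux_logits

def self_distillation_loss(logits, aux_logits, targets, T=4.0, alpha=0.5):
    """Cross-entropy on both heads, plus a KL term that passes the enhanced
    branch's softened predictions back to the primary head (teacher detached)."""
    ce = F.cross_entropy(logits, targets) + F.cross_entropy(aux_logits, targets)
    kd = F.kl_div(
        F.log_softmax(logits / T, dim=1),
        F.softmax(aux_logits.detach() / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return ce + alpha * kd
```

Consistent with the abstract, only backbone and classifier would be used at inference; the enhance branch and aux_classifier exist solely to shape the primary network's features during training.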

Published

2024-03-24

How to Cite

Lin, J., Wu, Z., Lin, W., Huang, J., & Luo, R. (2024). M2SD: Multiple Mixing Self-Distillation for Few-Shot Class-Incremental Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(4), 3422-3431. https://doi.org/10.1609/aaai.v38i4.28129

Issue

Vol. 38 No. 4 (2024)

Section

AAAI Technical Track on Computer Vision III