Online Class-Incremental Continual Learning with Adversarial Shapley Value

Authors

  • Dongsub Shim, University of Toronto
  • Zheda Mai, University of Toronto
  • Jihwan Jeong, University of Toronto
  • Scott Sanner, University of Toronto
  • Hyunwoo Kim, LG AI Research
  • Jongseong Jang, LG AI Research

Keywords:

Transfer/Adaptation/Multi-task/Meta/Automated Learning

Abstract

As image-based deep learning becomes pervasive on every device, from cell phones to smartwatches, there is a growing need to develop methods that continually learn from data while minimizing memory footprint and power consumption. While memory replay techniques have shown exceptional promise for continual learning, the best method for selecting which buffered images to replay remains an open question. In this paper, we specifically focus on the online class-incremental setting, where a model must learn new classes continually from an online data stream. To this end, we contribute a novel Adversarial Shapley value scoring method (ASER) that scores memory data samples according to their ability to preserve latent decision boundaries for previously observed classes (to maintain learning stability and avoid forgetting) while interfering with latent decision boundaries of the current classes being learned (to encourage plasticity and optimal learning of new class boundaries). Overall, we observe that our proposed ASER method provides competitive or improved performance compared to state-of-the-art replay-based continual learning methods on a variety of datasets.
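To make the scoring idea concrete, the following is a minimal sketch (not the authors' implementation) of the KNN Shapley value recursion of Jia et al. (2019), which efficient Shapley-based sample scoring of this kind typically builds on, combined with an ASER-style score: each memory sample's value with respect to previously seen classes (cooperative term) minus its value with respect to the incoming batch (adversarial term). All function and parameter names here are our own illustrative choices.

```python
import numpy as np

def knn_shapley(X_cand, y_cand, x_eval, y_eval, K=3):
    """Exact KNN Shapley value of each candidate for one evaluation point
    (recursion from Jia et al., 2019). Sketch only; not the paper's code."""
    N = len(X_cand)
    # Sort candidates by distance to the evaluation point, nearest first.
    order = np.argsort(np.linalg.norm(X_cand - x_eval, axis=1))
    match = (y_cand[order] == y_eval).astype(float)  # label agreement
    s = np.zeros(N)
    s[N - 1] = match[N - 1] / N                      # farthest candidate
    for i in range(N - 2, -1, -1):                   # walk toward nearest
        s[i] = s[i + 1] + (match[i] - match[i + 1]) / K * min(K, i + 1) / (i + 1)
    sv = np.zeros(N)
    sv[order] = s                                    # undo the distance sort
    return sv

def aser_style_score(X_mem, y_mem, X_old, y_old, X_new, y_new, K=3):
    """Score memory samples: high if they help classify old-class points
    (stability) while interfering with the incoming batch (plasticity)."""
    coop = np.mean([knn_shapley(X_mem, y_mem, x, y, K)
                    for x, y in zip(X_old, y_old)], axis=0)
    adv = np.mean([knn_shapley(X_mem, y_mem, x, y, K)
                   for x, y in zip(X_new, y_new)], axis=0)
    return coop - adv
```

In a replay loop, one would retrieve the memory samples with the highest such scores for the next training step; the recursion makes each evaluation O(N log N) in the memory size rather than exponential, which is what makes Shapley-based retrieval practical online.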

Published

2021-05-18

How to Cite

Shim, D., Mai, Z., Jeong, J., Sanner, S., Kim, H., & Jang, J. (2021). Online Class-Incremental Continual Learning with Adversarial Shapley Value. Proceedings of the AAAI Conference on Artificial Intelligence, 35(11), 9630-9638. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17159

Section

AAAI Technical Track on Machine Learning IV