Multi-Domain Multi-Task Rehearsal for Lifelong Learning

Authors

  • Fan Lyu College of Intelligence and Computing, Tianjin University
  • Shuai Wang College of Intelligence and Computing, Tianjin University
  • Wei Feng College of Intelligence and Computing, Tianjin University
  • Zihan Ye School of Electronic & Information Engineering, Suzhou University of Science and Technology
  • Fuyuan Hu School of Electronic & Information Engineering, Suzhou University of Science and Technology
  • Song Wang Department of Computer Science and Engineering, University of South Carolina

DOI:

https://doi.org/10.1609/aaai.v35i10.17068

Keywords:

(Deep) Neural Network Algorithms, Time-Series/Data Streams, Classification and Regression

Abstract

Rehearsal, which reminds the model of old knowledge by storing samples from previous tasks, is one of the most effective ways to mitigate catastrophic forgetting in lifelong learning, i.e., the biased forgetting of previous knowledge when moving to new tasks. However, in most previous rehearsal-based methods, the old tasks suffer from unpredictable domain shift when the new task is trained. This is because these methods ignore two significant factors. First, the Data Imbalance between the new task and the old tasks makes the domain of the old tasks prone to shift. Second, the Task Isolation among all tasks pushes the domain shift in unpredictable directions. To address this unpredictable domain shift, in this paper we propose Multi-Domain Multi-Task (MDMT) rehearsal, which trains the old tasks and the new task in parallel and on equal footing to break the isolation among tasks. Specifically, a two-level angular margin loss is proposed to encourage intra-class/task compactness and inter-class/task discrepancy, which keeps the model from domain chaos. In addition, to further address the domain shift of the old tasks, we propose an optional episodic distillation loss on the memory to anchor the knowledge of each old task. Experiments on benchmark datasets validate that the proposed approach can effectively mitigate the unpredictable domain shift.
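To give a sense of the kind of loss the abstract refers to, the following is a minimal NumPy sketch of a single-level angular margin loss in the ArcFace style: features and class weights are L2-normalized so logits are cosines, an additive angular margin is applied to the target-class angle, and softmax cross-entropy is taken over the rescaled logits. This is an illustration of the general technique only, not the paper's two-level formulation; the function name, `margin`, and `scale` values are assumptions for the sketch.

```python
import numpy as np

def angular_margin_loss(features, weights, labels, margin=0.5, scale=30.0):
    """Illustrative additive angular margin loss (single level).

    features: (N, D) raw feature vectors
    weights:  (C, D) class weight vectors
    labels:   (N,)  integer class labels
    """
    # L2-normalize features and class weights so the logits are cosines
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = np.clip(f @ w.T, -1.0, 1.0)          # (N, C) cosine similarities
    theta = np.arccos(cos)                     # angles between feature and class

    # Add the angular margin only to each sample's target-class angle
    extra = np.zeros_like(theta)
    extra[np.arange(len(labels)), labels] = margin
    logits = scale * np.cos(theta + extra)

    # Numerically stable softmax cross-entropy over margin-adjusted logits
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.log(probs[np.arange(len(labels)), labels]).mean()
```

Because the margin shrinks the target-class logit, the loss with a positive margin is strictly larger than without it for the same inputs, which is what forces tighter intra-class clusters and wider inter-class separation.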

Published

2021-05-18

How to Cite

Lyu, F., Wang, S., Feng, W., Ye, Z., Hu, F., & Wang, S. (2021). Multi-Domain Multi-Task Rehearsal for Lifelong Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10), 8819-8827. https://doi.org/10.1609/aaai.v35i10.17068

Section

AAAI Technical Track on Machine Learning III