Class Incremental Learning for Task-Oriented Dialogue System with Contrastive Distillation on Internal Representations (Student Abstract)

Authors

  • Qiancheng Xu Georgia Institute of Technology
  • Min Yang Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
  • Binzong Geng University of Science and Technology of China

DOI:

https://doi.org/10.1609/aaai.v37i13.27044

Keywords:

Continual Learning, Contrastive Learning, Task-oriented Dialogue System, Memory Replay

Abstract

The ability to learn continually over time, acquiring new knowledge while retaining previously learned experience, is essential for developing an online task-oriented dialogue system (TDS). In this paper, we focus on the class incremental learning scenario, in which the TDS is evaluated without the dialogue domain being specified. We employ contrastive distillation on the intermediate representations of dialogues to learn transferable representations that suffer less from catastrophic forgetting. In addition, we introduce a dynamic update mechanism that explicitly preserves learned experience by updating only the parameters related to the new task while keeping the remaining parameters fixed. Extensive experiments demonstrate that our method significantly outperforms strong baselines.
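The contrastive distillation described above can be illustrated with a minimal sketch. The idea is to treat the frozen previous model as a teacher: for each dialogue, the current model's intermediate representation is pulled toward the teacher's representation of the same dialogue (positive pair) and pushed away from the teacher's representations of other dialogues (negatives), InfoNCE-style. All names and the specific loss form below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def contrastive_distillation_loss(student, teacher, temperature=0.1):
    """Hypothetical InfoNCE-style contrastive distillation sketch.

    student, teacher: (batch, dim) intermediate representations of the
    same batch of dialogues from the current model and from the frozen
    previous-task model. Row i of `teacher` is the positive for row i
    of `student`; all other teacher rows act as negatives.
    """
    # L2-normalize so dot products become cosine similarities.
    s = student / np.linalg.norm(student, axis=1, keepdims=True)
    t = teacher / np.linalg.norm(teacher, axis=1, keepdims=True)

    logits = s @ t.T / temperature                 # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # Positive pairs sit on the diagonal; minimize their negative log-likelihood.
    return -np.mean(np.diag(log_probs))
```

When the current model's representations match the teacher's, the diagonal similarities dominate and the loss is small; drift away from the teacher (forgetting) increases the loss, which is what makes it a distillation signal.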

Published

2023-09-06

How to Cite

Xu, Q., Yang, M., & Geng, B. (2023). Class Incremental Learning for Task-Oriented Dialogue System with Contrastive Distillation on Internal Representations (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 16368-16369. https://doi.org/10.1609/aaai.v37i13.27044