Learning to Imagine: Distillation-Based Interactive Context Exploitation for Dialogue State Tracking

Authors

  • Jinyu Guo — State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications; School of Computer Science, Beijing University of Posts and Telecommunications
  • Kai Shuang — State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications; School of Computer Science, Beijing University of Posts and Telecommunications
  • Kaihang Zhang — State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications; School of Computer Science, Beijing University of Posts and Telecommunications
  • Yixuan Liu — State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications; School of Computer Science, Beijing University of Posts and Telecommunications
  • Jijie Li — Beijing Academy of Artificial Intelligence, Beijing, China
  • Zihan Wang — Graduate School of Information Science and Technology, The University of Tokyo

DOI:

https://doi.org/10.1609/aaai.v37i11.26510

Keywords:

SNLP: Conversational AI/Dialogue Systems

Abstract

In dialogue state tracking (DST), the exploitation of dialogue history is a crucial research direction, and existing DST models can be divided into two categories: full-history models and partial-history models. Because their "select first, use later" mechanism explicitly filters out distracting information before it reaches the downstream state prediction, partial-history models have recently achieved a performance advantage over full-history models. However, alongside the redundant information, some critical dialogue context is inevitably filtered out by partial-history models at the same time. To reconcile full contextual coverage with the avoidance of redundant information, we propose DICE-DST, a model-agnostic module widely applicable to partial-history DST models, which strengthens the context-exploitation ability of each DST model's encoder. Specifically, we first construct a teacher encoder and devise two contextual reasoning tasks to train it to acquire extensive dialogue contextual knowledge. We then transfer the contextual knowledge from the teacher encoder to the student encoder via a novel turn-level attention-alignment distillation. Experimental results show that our approach substantially improves the performance of partial-history DST models and thereby achieves new state-of-the-art performance on multiple mainstream datasets while maintaining high efficiency.
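The core transfer step described above — aligning the student encoder's attention with the teacher's — can be sketched as follows. This is a minimal illustration under our own assumptions, not the paper's exact formulation: we assume the teacher attends over the full dialogue history while the student sees only a subset of turn-level positions, and we align the two by re-normalizing the teacher's attention onto the shared positions and penalizing the KL divergence. All function names here are hypothetical.

```python
# Hedged sketch of turn-level attention-alignment distillation.
# Assumptions (not from the paper): attention is a single discrete
# distribution per example, and alignment uses KL divergence on the
# positions shared by teacher and student inputs.

import math

def normalize(weights):
    """Renormalize non-negative attention weights to sum to 1."""
    total = sum(weights)
    return [w / total for w in weights]

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete attention distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def attention_alignment_loss(teacher_attn, student_attn, shared_positions):
    """Distillation loss: restrict the teacher's full-history attention to
    the positions the student also sees (e.g. the current turn's tokens),
    renormalize, and compare with the student's attention distribution."""
    teacher_on_shared = normalize([teacher_attn[i] for i in shared_positions])
    return kl_divergence(teacher_on_shared, normalize(student_attn))

# Usage: a teacher attending over 6 history positions, a student over the
# last 3; the loss is zero when the student matches the teacher's
# renormalized attention and grows as the distributions diverge.
teacher = [0.1, 0.1, 0.2, 0.2, 0.3, 0.1]
shared = [3, 4, 5]
matching_student = normalize([teacher[i] for i in shared])
drifting_student = [0.5, 0.3, 0.2]
```

In practice this loss would be computed per attention head and per dialogue turn and added to the student's task loss; the sketch keeps only the alignment term to show the mechanism.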

Published

2023-06-26

How to Cite

Guo, J., Shuang, K., Zhang, K., Liu, Y., Li, J., & Wang, Z. (2023). Learning to Imagine: Distillation-Based Interactive Context Exploitation for Dialogue State Tracking. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 12845-12853. https://doi.org/10.1609/aaai.v37i11.26510

Section

AAAI Technical Track on Speech & Natural Language Processing