Learning to Imagine: Distillation-Based Interactive Context Exploitation for Dialogue State Tracking
Keywords: SNLP: Conversational AI/Dialogue Systems
Abstract
In dialogue state tracking (DST), the exploitation of dialogue history is a crucial research direction, and existing DST models can be divided into two categories: full-history models and partial-history models. Because the "select first, use later" mechanism explicitly filters out distracting information before it reaches the downstream state prediction, partial-history models have recently gained a performance advantage over full-history models. However, along with the redundant information, some critical dialogue context is inevitably filtered out by partial-history models as well. To reconcile broad contextual coverage with the avoidance of redundant information, we propose DICE-DST, a model-agnostic module widely applicable to partial-history DST models, which aims to strengthen the context-exploitation ability of each DST model's encoder. Specifically, we first construct a teacher encoder and devise two contextual reasoning tasks to train it to acquire extensive dialogue contextual knowledge. We then transfer this contextual knowledge from the teacher encoder to the student encoder via a novel turn-level attention-alignment distillation. Experimental results show that our approach substantially improves the performance of partial-history DST models and thereby achieves new state-of-the-art performance on multiple mainstream datasets while maintaining high efficiency.
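The core mechanism named in the abstract — turn-level attention-alignment distillation — can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's implementation: the function name, the turn-pooling scheme, and the use of a mean-squared-error penalty are all hypothetical choices; the paper's actual loss and attention handling may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax for building attention distributions.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def turn_level_attention_alignment_loss(teacher_attn, student_attn, turn_ids):
    """Hypothetical sketch of a turn-level attention-alignment loss.

    Pools token-level attention mass into per-turn totals, then penalizes
    the squared difference between teacher and student turn distributions.

    teacher_attn, student_attn: (num_queries, num_tokens) attention maps,
        each row summing to 1.
    turn_ids: (num_tokens,) dialogue-turn index of each token.
    """
    num_turns = int(turn_ids.max()) + 1
    t_turn = np.zeros((teacher_attn.shape[0], num_turns))
    s_turn = np.zeros_like(t_turn)
    for turn in range(num_turns):
        mask = turn_ids == turn
        # Total attention mass the encoder assigns to this dialogue turn.
        t_turn[:, turn] = teacher_attn[:, mask].sum(axis=1)
        s_turn[:, turn] = student_attn[:, mask].sum(axis=1)
    # Mean squared error between turn-level attention distributions.
    return float(((t_turn - s_turn) ** 2).mean())
```

Pooling to the turn level (rather than matching raw token-level maps) reflects the abstract's framing: the student is pushed to distribute its attention across dialogue turns the way the context-rich teacher does, without being forced to copy token-by-token weights.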
How to Cite
Guo, J., Shuang, K., Zhang, K., Liu, Y., Li, J., & Wang, Z. (2023). Learning to Imagine: Distillation-Based Interactive Context Exploitation for Dialogue State Tracking. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 12845-12853. https://doi.org/10.1609/aaai.v37i11.26510
AAAI Technical Track on Speech & Natural Language Processing