Exploring Auxiliary Reasoning Tasks for Task-oriented Dialog Systems with Meta Cooperative Learning
Abstract
In this paper, we propose a Meta Cooperative Learning (MCL) framework for task-oriented dialog systems (TDSs). Our model consists of an auxiliary KB reasoning task that learns meta KB knowledge, an auxiliary dialogue reasoning task that learns dialogue patterns, and a primary TDS task that aims to both retrieve accurate entities from the KB and generate natural responses. The three tasks are coordinated via meta learning to achieve collective success in retrieving accurate KB entities and generating human-like responses. Concretely, the dialog generation model amalgamates complementary meta KB and dialog knowledge from the two novel auxiliary reasoning tasks, which together guide the construction of a high-quality TDS through regularization terms that force the primary network to produce results similar to those of the auxiliary networks. MCL automatically learns appropriate labels for the two auxiliary reasoning tasks from the primary task, without requiring access to any further data. The key idea behind MCL is to use the performance of the primary task, which is trained alongside the auxiliary tasks in one iteration, to improve the auxiliary labels for the next iteration via meta learning. Experimental results on three benchmark datasets show that MCL generates higher-quality responses than several strong baselines in terms of both automatic and human evaluations. Code to reproduce the results in this paper is available at: https://github.com/siat-nlp/MCL.
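The training scheme sketched in the abstract can be illustrated with a deliberately tiny toy example. The sketch below is a hypothetical one-dimensional analogue, not the paper's implementation (see the linked repository for that): a "primary" model is trained with a consistency regularizer pulling its output toward an auxiliary soft label, and a meta step then refines that auxiliary label using the primary task's own loss as the signal, mirroring the idea of improving auxiliary labels for the next iteration from primary-task performance. All function names, the scalar setup, and the finite-difference meta gradient are illustrative assumptions.

```python
def primary_update(w, aux_label, x, y_true, lam, lr):
    """One gradient step on the primary loss plus a consistency
    regularizer toward the auxiliary label (illustrative 1-D model y = w*x)."""
    y_pred = w * x
    grad_w = 2 * (y_pred - y_true) * x + 2 * lam * (y_pred - aux_label) * x
    return w - lr * grad_w

def primary_loss(w, x, y_true):
    """Primary-task loss alone (no regularizer): squared error."""
    return (w * x - y_true) ** 2

def train_mcl_toy(x=2.0, y_true=6.0, lam=0.5, lr=0.05,
                  meta_lr=1.0, eps=1e-3, steps=200):
    w, aux_label = 0.0, 0.0
    for _ in range(steps):
        # Inner step: train the primary model, regularized toward the
        # auxiliary network's current output (here, a scalar soft label).
        w = primary_update(w, aux_label, x, y_true, lam, lr)
        # Meta step: estimate (by finite differences) how the auxiliary
        # label should move so that the *next* primary update lowers the
        # primary-task loss -- primary performance improves the aux labels.
        loss_plus = primary_loss(
            primary_update(w, aux_label + eps, x, y_true, lam, lr), x, y_true)
        loss_minus = primary_loss(
            primary_update(w, aux_label - eps, x, y_true, lam, lr), x, y_true)
        meta_grad = (loss_plus - loss_minus) / (2 * eps)
        aux_label -= meta_lr * meta_grad
    return w, aux_label, primary_loss(w, x, y_true)
```

In this toy run the auxiliary label drifts toward the true target and the primary parameter converges, showing how the meta step keeps the regularizer helpful rather than harmful; the actual MCL framework applies the same idea with neural primary and auxiliary networks over KB and dialogue reasoning.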
How to Cite
Qin, B., Yang, M., Bing, L., Jiang, Q., Li, C., & Xu, R. (2021). Exploring Auxiliary Reasoning Tasks for Task-oriented Dialog Systems with Meta Cooperative Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15), 13701-13708. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17615
AAAI Technical Track on Speech and Natural Language Processing II