TY  - JOUR
AU  - Qin, Bowen
AU  - Yang, Min
AU  - Bing, Lidong
AU  - Jiang, Qingshan
AU  - Li, Chengming
AU  - Xu, Ruifeng
PY  - 2021/05/18
Y2  - 2024/03/29
TI  - Exploring Auxiliary Reasoning Tasks for Task-oriented Dialog Systems with Meta Cooperative Learning
JF  - Proceedings of the AAAI Conference on Artificial Intelligence
JA  - AAAI
VL  - 35
IS  - 15
SE  - AAAI Technical Track on Speech and Natural Language Processing II
DO  - 10.1609/aaai.v35i15.17615
UR  - https://ojs.aaai.org/index.php/AAAI/article/view/17615
SP  - 13701-13708
AB  - In this paper, we propose a Meta Cooperative Learning (MCL) framework for task-oriented dialog systems (TDSs). Our model consists of an auxiliary KB reasoning task for learning meta KB knowledge, an auxiliary dialogue reasoning task for learning dialogue patterns, and a TDS task (primary task) that aims not only to retrieve accurate entities from the KB but also to generate natural responses. The three tasks are coordinated via meta learning to achieve collective success in both retrieving accurate KB entities and generating human-like responses. Concretely, the dialog generation model amalgamates complementary meta KB and dialog knowledge from the two novel auxiliary reasoning tasks, which together provide integrated guidance for building a high-quality TDS by adding regularization terms that force the primary network to produce results similar to those of the auxiliary networks. Meanwhile, MCL automatically learns appropriate labels for the two auxiliary reasoning tasks from the primary task, without requiring access to any additional data. The key idea behind MCL is to use the performance of the primary task, which is trained alongside the auxiliary tasks in one iteration, to improve the auxiliary labels for the next iteration with meta learning. Experimental results on three benchmark datasets show that MCL generates higher-quality responses than several strong baselines in terms of both automatic and human evaluations. Code to reproduce the results in this paper is available at: https://github.com/siat-nlp/MCL.
ER  - 