Reasoning in Dialog: Improving Response Generation by Context Reading Comprehension

Authors

  • Xiuying Chen Wangxuan Institute of Computer Technology, Peking University, Beijing, China; Center for Data Science, AAIS, Peking University, Beijing, China
  • Zhi Cui Xiaomi AI Lab
  • Jiayi Zhang Xiaomi AI Lab
  • Chen Wei Xiaomi AI Lab
  • Jianwei Cui Xiaomi AI Lab
  • Bin Wang Xiaomi AI Lab
  • Dongyan Zhao Wangxuan Institute of Computer Technology, Peking University, Beijing, China; Center for Data Science, AAIS, Peking University, Beijing, China
  • Rui Yan Gaoling School of Artificial Intelligence, Renmin University of China; Beijing Academy of Artificial Intelligence

DOI:

https://doi.org/10.1609/aaai.v35i14.17502

Keywords:

Conversational AI/Dialog Systems

Abstract

In multi-turn dialog, utterances do not always take the full form of sentences (Carbonell 1983), which naturally makes understanding the dialog context more difficult. However, it is essential to fully grasp the dialog context to generate a reasonable response. Hence, in this paper, we propose to improve response generation performance by examining the model's ability to answer a reading comprehension question, where the question is focused on the omitted information in the dialog. Enlightened by the multi-task learning scheme, we propose a joint framework that unifies these two tasks, sharing the same encoder to extract the common, task-invariant features while using different decoders to learn task-specific features. To better fuse information from the question and the dialog history in the encoding part, we propose to augment the Transformer architecture with a memory updater, which is designed to selectively store and update the dialog history information so as to support downstream tasks. For the experiment, we employ human annotators to write and examine a large-scale dialog reading comprehension dataset. Extensive experiments are conducted on this dataset, and the results show that the proposed model brings substantial improvements over several strong baselines on both tasks. In this way, we demonstrate that reasoning can indeed help better response generation and vice versa. We release our large-scale dataset for further research.
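The gated "store vs. keep" behavior of a memory updater can be illustrated in miniature. The sketch below is a hypothetical scalar simplification, not the paper's actual module (which operates on Transformer hidden states); the names `memory_update`, `w_gate`, and `b_gate` are illustrative assumptions.

```python
import math

def sigmoid(x):
    """Standard logistic function, used to produce a gate value in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def memory_update(memory, hidden, w_gate=1.0, b_gate=0.0):
    """Scalar sketch of a gated memory updater: for each slot, a sigmoid
    gate decides how much of the new turn's hidden value to store versus
    how much of the old memory to keep (a convex blend of the two)."""
    updated = []
    for m, h in zip(memory, hidden):
        g = sigmoid(w_gate * (h - m) + b_gate)  # gate computed from current inputs
        updated.append(g * h + (1.0 - g) * m)   # g -> 1: overwrite; g -> 0: retain
    return updated

# When old memory and new input agree, the slot is unchanged;
# when they differ, the gate partially overwrites the old value.
print(memory_update([1.0, 0.0], [1.0, 1.0]))
```

In the real model the gate would be a learned projection of the concatenated memory and hidden states, but the same convex-blend update rule applies per dimension.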

Published

2021-05-18

How to Cite

Chen, X., Cui, Z., Zhang, J., Wei, C., Cui, J., Wang, B., Zhao, D., & Yan, R. (2021). Reasoning in Dialog: Improving Response Generation by Context Reading Comprehension. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14), 12683-12691. https://doi.org/10.1609/aaai.v35i14.17502

Section

AAAI Technical Track on Speech and Natural Language Processing I