Filling the Gap of Utterance-aware and Speaker-aware Representation for Multi-turn Dialogue

Authors

  • Longxiang Liu: Department of Computer Science and Engineering, Shanghai Jiao Tong University; Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University
  • Zhuosheng Zhang: Department of Computer Science and Engineering, Shanghai Jiao Tong University; Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University
  • Hai Zhao: Department of Computer Science and Engineering, Shanghai Jiao Tong University; Key Laboratory of Shanghai Education Commission for Intelligent Interaction and Cognitive Engineering, Shanghai Jiao Tong University; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University
  • Xi Zhou: CloudWalk Technology
  • Xiang Zhou: CloudWalk Technology
  • Xi Zhou CloudWalk Technology
  • Xiang Zhou CloudWalk Technology

DOI:

https://doi.org/10.1609/aaai.v35i15.17582

Keywords:

Conversational AI/Dialog Systems

Abstract

A multi-turn dialogue is composed of multiple utterances from two or more different speaker roles. Utterance-aware and speaker-aware clues are therefore supposed to be well captured in models. However, in existing retrieval-based multi-turn dialogue modeling, the pre-trained language models (PrLMs) used as encoders represent the dialogue coarsely by taking the pairwise dialogue history and candidate response as a whole; the hierarchical information on utterance interrelations and speaker roles coupled in such representations is not well addressed. In this work, we propose a novel model that fills this gap by modeling the effective utterance-aware and speaker-aware representations entailed in a dialogue history. In detail, we decouple the contextualized word representations with masking mechanisms in a Transformer-based PrLM, making each word focus only on the words in the current utterance, in other utterances, and in the utterances of the two speaker roles (i.e., the utterances of the sender and the utterances of the receiver), respectively. Experimental results show that our method substantially boosts a strong ELECTRA baseline on four public benchmark datasets and achieves new state-of-the-art performance over previous methods. A series of ablation studies demonstrates the effectiveness of our method.
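The masking idea described in the abstract can be sketched as building four boolean attention masks from per-token utterance and speaker annotations, so that each channel of a Transformer layer attends only to the current utterance, to other utterances, or to one speaker role. The sketch below is a minimal illustration under these assumptions; the function name, input encoding (integer utterance indices, speaker ids 0 for sender and 1 for receiver), and mask convention are all hypothetical, not the authors' actual code.

```python
import numpy as np

def decoupling_masks(utterance_ids, speaker_ids):
    """Hypothetical helper: build four boolean attention masks for a
    token sequence, given each token's utterance index and speaker role
    (0 = sender, 1 = receiver). mask[i, j] == True means position i is
    allowed to attend to position j in that channel."""
    u = np.asarray(utterance_ids)
    s = np.asarray(speaker_ids)
    n = len(u)
    # Current-utterance channel: i and j belong to the same utterance.
    current = u[:, None] == u[None, :]
    # Other-utterances channel: the complement of the above.
    other = ~current
    # Speaker-role channels: every position may attend to all tokens
    # produced by the given role, regardless of its own role.
    sender = np.broadcast_to(s[None, :] == 0, (n, n))
    receiver = np.broadcast_to(s[None, :] == 1, (n, n))
    return current, other, sender, receiver
```

In an actual Transformer layer, each boolean mask would typically be converted to an additive mask (0 where attention is allowed, a large negative value where it is blocked) and added to the attention logits before the softmax, yielding one decoupled representation per channel.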

Published

2021-05-18

How to Cite

Liu, L., Zhang, Z., Zhao, H., Zhou, X., & Zhou, X. (2021). Filling the Gap of Utterance-aware and Speaker-aware Representation for Multi-turn Dialogue. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15), 13406-13414. https://doi.org/10.1609/aaai.v35i15.17582

Section

AAAI Technical Track on Speech and Natural Language Processing II