Who Did They Respond to? Conversation Structure Modeling Using Masked Hierarchical Transformer

Authors

  • Henghui Zhu, Boston University
  • Feng Nan, AWS AI Labs
  • Zhiguo Wang, AWS AI Labs
  • Ramesh Nallapati, AWS AI Labs
  • Bing Xiang, AWS AI Labs

DOI:

https://doi.org/10.1609/aaai.v34i05.6524

Abstract

Conversation structure is useful both for understanding the nature of conversation dynamics and for providing features for many downstream applications such as summarization of conversations. In this work, we define the problem of conversation structure modeling as identifying the parent utterance(s) to which each utterance in the conversation responds. Previous work usually considered a pair of utterances in isolation to decide whether one is the parent of the other. We believe the entire ancestral history is an important source of information for making accurate predictions. We therefore design a novel masking mechanism to guide the ancestor flow, and leverage the transformer model to aggregate all ancestors when predicting parent utterances. Our experiments are performed on the Reddit dataset (Zhang, Culbertson, and Paritosh 2017) and the Ubuntu IRC dataset (Kummerfeld et al. 2019). We also report experiments on a new, larger corpus from the Reddit platform and release this dataset. We show that the proposed model, which takes into account the ancestral history of the conversation, significantly outperforms several strong baselines, including the BERT model, on all datasets.
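The ancestor-masking idea described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example rather than the authors' released implementation: given parent links for the utterances seen so far, it builds a boolean mask that restricts each utterance's self-attention to its own ancestral chain, and applies it with a standard PyTorch multi-head attention layer. The `ancestor_mask` helper, the toy `parents` list, and the embedding dimensions are all assumptions made for illustration.

```python
import torch

def ancestor_mask(parents):
    """Boolean mask where entry (i, j) is True iff utterance i may
    attend to utterance j, i.e. j is i itself or one of its ancestors.
    parents[i] is the parent index of utterance i, or -1 for a root.
    Illustrative only; the paper's exact masking scheme may differ.
    """
    n = len(parents)
    mask = torch.zeros(n, n, dtype=torch.bool)
    for i in range(n):
        mask[i, i] = True
        j = parents[i]
        while j != -1:              # walk up the reply chain to the root
            mask[i, j] = True
            j = parents[j]
    return mask

# Toy usage: 5 utterance embeddings (e.g. from a sentence encoder such
# as BERT) arranged in a small reply tree: 0 <- {1, 2}, 1 <- 3, 3 <- 4.
n, d = 5, 64
utt_emb = torch.randn(1, n, d)      # (batch, num_utterances, dim)
parents = [-1, 0, 0, 1, 3]
attn = torch.nn.MultiheadAttention(d, num_heads=4, batch_first=True)
# PyTorch's boolean attn_mask uses True to *block* attention,
# so the ancestor mask is inverted before being passed in.
out, _ = attn(utt_emb, utt_emb, utt_emb, attn_mask=~ancestor_mask(parents))
```

With this mask, each utterance's representation aggregates only its ancestral history, which is what allows the model to score candidate parents using more context than an isolated utterance pair.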

Published

2020-04-03

How to Cite

Zhu, H., Nan, F., Wang, Z., Nallapati, R., & Xiang, B. (2020). Who Did They Respond to? Conversation Structure Modeling Using Masked Hierarchical Transformer. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 9741-9748. https://doi.org/10.1609/aaai.v34i05.6524

Section

AAAI Technical Track: Natural Language Processing