Interpretable NLG for Task-oriented Dialogue Systems with Heterogeneous Rendering Machines

Authors

  • Yangming Li, Harbin Institute of Technology
  • Kaisheng Yao, Ant Group

Keywords

Conversational AI/Dialog Systems, Interpretability & Analysis of NLP Models, Generation

Abstract

End-to-end neural networks have achieved promising performance in natural language generation (NLG). However, they are treated as black boxes and lack interpretability. To address this problem, we propose a novel framework, heterogeneous rendering machines (HRM), that interprets how neural generators render an input dialogue act (DA) into an utterance. HRM consists of a renderer set and a mode switcher. The renderer set contains multiple decoders that vary in both structure and functionality. At every generation step, the mode switcher selects an appropriate decoder from the renderer set to generate an item (a word or a phrase). To verify the effectiveness of our method, we conducted extensive experiments on 5 benchmark datasets. In terms of automatic metrics (e.g., BLEU), our model is competitive with the current state-of-the-art method. Qualitative analysis shows that our model can interpret the rendering process of neural generators well. Human evaluation also confirms the interpretability of our proposed approach.
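The abstract's core idea (a renderer set of heterogeneous decoders, with a mode switcher choosing one per generation step) can be sketched minimally as follows. This is an illustrative toy, not the paper's implementation: the `plan`, the renderer names, and the example dialogue act are all assumptions, and the learned mode switcher is replaced by a fixed per-step decision list.

```python
def hrm_generate(dialogue_act, plan):
    """Render a dialogue act (DA) into an utterance, one item per step.

    `plan` stands in for the learned mode switcher: at each step it names
    which renderer from the renderer set fires, and on what input.
    """
    # Toy "renderer set": two decoders differing in functionality.
    renderers = {
        "word": lambda item: item,               # emit a connective word
        "copy": lambda item: dialogue_act[item], # copy a DA slot value verbatim
    }
    return " ".join(renderers[mode](item) for mode, item in plan)

# Hypothetical DA and switching plan for illustration only.
da = {"name": "Blue Spice", "area": "riverside"}
plan = [("word", "the"), ("copy", "name"), ("word", "restaurant"),
        ("word", "is"), ("word", "near"), ("word", "the"), ("copy", "area")]
print(hrm_generate(da, plan))
# → the Blue Spice restaurant is near the riverside
```

Because each output item is tagged with the renderer that produced it, the generation trace itself serves as the interpretation of how the DA was rendered.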

Published

2021-05-18

How to Cite

Li, Y., & Yao, K. (2021). Interpretable NLG for Task-oriented Dialogue Systems with Heterogeneous Rendering Machines. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15), 13306-13314. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17571

Section

AAAI Technical Track on Speech and Natural Language Processing II