An Affect-Rich Neural Conversational Model with Biased Attention and Weighted Cross-Entropy Loss

Authors

  • Peixiang Zhong, Nanyang Technological University
  • Di Wang, Nanyang Technological University
  • Chunyan Miao, Nanyang Technological University

DOI:

https://doi.org/10.1609/aaai.v33i01.33017492

Abstract

Affect conveys important implicit information in human communication. Having the capability to correctly express affect during human-machine conversations is one of the major milestones in artificial intelligence. In recent years, extensive research on open-domain neural conversational models has been conducted. However, embedding affect into such models is still underexplored. In this paper, we propose an end-to-end affect-rich open-domain neural conversational model that produces responses not only appropriate in syntax and semantics, but also rich in affect. Our model extends the Seq2Seq model and adopts VAD (Valence, Arousal and Dominance) affective notations to embed each word with affect. In addition, our model accounts for the effect of negators and intensifiers via a novel affective attention mechanism, which biases attention towards affect-rich words in input sentences. Lastly, we train our model with an affect-incorporated objective function to encourage the generation of affect-rich words in the output responses. Evaluations based on both perplexity and human judgments show that our model outperforms the state-of-the-art baseline model of comparable size in producing natural and affect-rich responses.
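The two mechanisms named in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's exact equations: the neutral VAD point, the linear bias/weight forms, and the `beta`/`gamma` hyperparameters are all assumptions made here for clarity. "Affect strength" is taken to be a word's distance from an assumed neutral VAD point; attention scores toward the source are shifted by that strength, and the cross-entropy loss up-weights affect-rich target words.

```python
import numpy as np

# Assumed neutral (Valence, Arousal, Dominance) point; the paper's actual
# lexicon scale and neutral point may differ.
NEUTRAL = np.array([5.0, 1.0, 5.0])

def affect_strength(vad):
    """Distance of a word's VAD vector from the assumed neutral point."""
    return np.linalg.norm(np.asarray(vad, dtype=float) - NEUTRAL)

def biased_attention(scores, vads, beta=0.1):
    """Softmax over attention scores biased toward affect-rich source words.

    scores: raw alignment scores over source positions.
    vads:   one VAD triple per source word.
    beta:   illustrative bias coefficient (assumption, not from the paper).
    """
    biased = np.asarray(scores, dtype=float) + beta * np.array(
        [affect_strength(v) for v in vads]
    )
    e = np.exp(biased - biased.max())  # numerically stable softmax
    return e / e.sum()

def weighted_ce(log_probs, target_ids, target_vads, gamma=0.1):
    """Cross-entropy with larger weight on affect-rich target words.

    log_probs: (T, V) array of log-probabilities per time step.
    gamma:     illustrative weighting coefficient (assumption).
    """
    weights = 1.0 + gamma * np.array([affect_strength(v) for v in target_vads])
    nll = -np.array([log_probs[t, i] for t, i in enumerate(target_ids)])
    return float(np.mean(weights * nll))
```

Under this sketch, a source word with an extreme VAD vector draws more attention mass than a neutral word with the same raw score, and mispredicting an affect-rich target word costs more than mispredicting a neutral one.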

Published

2019-07-17

How to Cite

Zhong, P., Wang, D., & Miao, C. (2019). An Affect-Rich Neural Conversational Model with Biased Attention and Weighted Cross-Entropy Loss. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 7492-7500. https://doi.org/10.1609/aaai.v33i01.33017492

Section

AAAI Technical Track: Natural Language Processing