Contrast and Generation Make BART a Good Dialogue Emotion Recognizer


  • Shimin Li, Fudan University
  • Hang Yan, Fudan University
  • Xipeng Qiu, Fudan University; Peng Cheng Laboratory



Speech & Natural Language Processing (SNLP)


In dialogue systems, utterances with similar semantics may convey distinct emotions in different contexts. Therefore, modeling long-range contextual emotional relationships with speaker dependency plays a crucial part in dialogue emotion recognition. Meanwhile, distinguishing between emotion categories is non-trivial, since they often carry semantically similar sentiments. To this end, we adopt supervised contrastive learning to make different emotions mutually exclusive, so that similar emotions can be identified more reliably. In addition, we utilize an auxiliary response generation task to enhance the model's ability to handle context information, thereby forcing the model to recognize emotions with similar semantics across diverse contexts. To achieve these objectives, we use the pre-trained encoder-decoder model BART as our backbone, since it is well suited to both understanding and generation tasks. Experiments on four datasets demonstrate that our proposed model obtains significantly more favorable results than the state-of-the-art models in dialogue emotion recognition. The ablation study further demonstrates the effectiveness of the supervised contrastive loss and the generative loss.
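The abstract names supervised contrastive learning as the mechanism that pushes different emotion categories apart in embedding space. As an illustrative sketch only (not the authors' implementation, whose loss weighting and batching details the abstract does not specify), the standard supervised contrastive term over a batch of utterance embeddings can be written as follows:

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss (Khosla et al., 2020) over a batch.

    features: (N, D) utterance embeddings (L2-normalized inside)
    labels:   (N,)   integer emotion labels
    Each anchor is pulled toward same-label samples (positives) and
    pushed away from all other samples in the batch.
    """
    features = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = features @ features.T / temperature          # (N, N) cosine similarities
    N = sim.shape[0]
    not_self = ~np.eye(N, dtype=bool)                  # exclude the anchor itself

    # log-softmax over all non-anchor samples, shifted for numerical stability
    sim_max = sim.max(axis=1, keepdims=True)
    exp_sim = np.exp(sim - sim_max) * not_self
    log_prob = sim - sim_max - np.log(exp_sim.sum(axis=1, keepdims=True))

    # positives: other samples that share the anchor's emotion label
    pos_mask = (labels[:, None] == labels[None, :]) & not_self

    # average negative log-probability over each anchor's positives
    pos_count = np.maximum(pos_mask.sum(axis=1), 1)    # guard anchors with no positive
    loss_per_anchor = -(log_prob * pos_mask).sum(axis=1) / pos_count
    return loss_per_anchor.mean()
```

As a sanity check, a batch whose same-label embeddings cluster together yields a lower loss than the same embeddings with shuffled labels, which is exactly the pressure that makes semantically close emotion categories mutually exclusive.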




How to Cite

Li, S., Yan, H., & Qiu, X. (2022). Contrast and Generation Make BART a Good Dialogue Emotion Recognizer. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10), 11002-11010.



AAAI Technical Track on Speech and Natural Language Processing