BERT-ERC: Fine-Tuning BERT Is Enough for Emotion Recognition in Conversation
DOI:
https://doi.org/10.1609/aaai.v37i11.26582
Keywords:
SNLP: Sentiment Analysis and Stylistic Analysis, SNLP: Conversational AI/Dialogue Systems, SNLP: Language Models, SNLP: Text Classification
Abstract
Previous work on emotion recognition in conversation (ERC) follows a two-step paradigm: first produce context-independent utterance features by fine-tuning pretrained language models (PLMs), then model contextual information and dialogue structure over the extracted features. However, we find that this paradigm has several limitations. Accordingly, we propose a novel paradigm that exploits contextual information and dialogue structure during the fine-tuning step itself, adapting the PLM to the ERC task in terms of input text, classification structure, and training strategy. Following this paradigm, we develop our model BERT-ERC, which improves ERC performance in three respects: suggestive text, a fine-grained classification module, and two-stage training. Compared to existing methods, BERT-ERC achieves substantial improvements on four datasets, indicating its effectiveness and generalization capability. In addition, we set up a limited-resources scenario and an online-prediction scenario to approximate real-world conditions. Extensive experiments demonstrate that the proposed paradigm significantly outperforms the previous one and can be adapted to various scenarios.
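The core idea of the proposed paradigm can be illustrated with a minimal sketch (not the authors' released code): rather than encoding each utterance in isolation, the text fed to BERT is assembled from the target utterance plus its surrounding dialogue turns, so contextual information is available during fine-tuning, and the pooled representation is classified into emotion labels. The speaker prefixes, context window size, and label set below are illustrative assumptions, not details taken from the paper.

import torch
from transformers import BertTokenizer, BertForSequenceClassification

EMOTIONS = ["neutral", "happy", "sad", "angry"]  # assumed label set

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(EMOTIONS)
)

def build_input(dialogue, target_idx, window=2):
    """Concatenate the target utterance with nearby turns so that
    dialogue context is visible to the PLM during fine-tuning."""
    start = max(0, target_idx - window)
    context = " ".join(
        f"{spk}: {utt}" for spk, utt in dialogue[start:target_idx]
    )
    speaker, utterance = dialogue[target_idx]
    # Separate context from the target turn so the model knows which
    # utterance to classify.
    return f"{context} [SEP] {speaker}: {utterance}"

dialogue = [("A", "I lost my keys again."), ("B", "Oh no, not again!")]
text = build_input(dialogue, target_idx=1)
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(EMOTIONS[logits.argmax(-1).item()])

In a fine-tuning run, the same context-aware inputs would be paired with gold emotion labels and the whole model trained end to end, which is what distinguishes this paradigm from extracting frozen, context-independent features first.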
Published
2023-06-26
How to Cite
Qin, X., Wu, Z., Zhang, T., Li, Y., Luan, J., Wang, B., Wang, L., & Cui, J. (2023). BERT-ERC: Fine-Tuning BERT Is Enough for Emotion Recognition in Conversation. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 13492-13500. https://doi.org/10.1609/aaai.v37i11.26582
Section
AAAI Technical Track on Speech & Natural Language Processing