Label Smoothing for Emotion Detection (Student Abstract)

Authors

  • George Maratos, University of Illinois at Chicago
  • Tiberiu Sosea, University of Illinois at Chicago
  • Cornelia Caragea, University of Illinois at Chicago

DOI:

https://doi.org/10.1609/aaai.v37i13.27001

Keywords:

Emotion Detection, Label Smoothing, Calibration

Abstract

Automatically detecting emotions from text has countless applications, ranging from large-scale opinion mining to social robots in healthcare and education. However, emotions are subjective in nature and are often expressed in ambiguous ways. At the same time, detecting emotions can also require implicit reasoning, which may not be available as surface-level, lexical information. In this work, we conjecture that the overconfidence of pre-trained language models such as BERT is a critical problem in emotion detection and show that alleviating this problem can considerably improve the generalization performance. We carry out comprehensive experiments on four emotion detection benchmark datasets and show that calibrating our model predictions leads to an average improvement of 1.35% in weighted F1 score.
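
The sketch below illustrates the label smoothing objective named in the title: instead of training against one-hot targets, the true emotion class receives probability mass 1 − ε and the remaining ε is spread uniformly over the K classes, which discourages overconfident predictions. This is a minimal illustration only; the PyTorch implementation, the class count, and ε = 0.1 are assumptions for the example and are not taken from the paper.

```python
import torch
import torch.nn.functional as F


def label_smoothing_loss(logits: torch.Tensor,
                         targets: torch.Tensor,
                         epsilon: float = 0.1) -> torch.Tensor:
    """Cross-entropy against smoothed targets.

    Smoothed target distribution: q_k = (1 - epsilon) * 1[k == y] + epsilon / K,
    i.e. the gold class keeps most of the mass and the rest is uniform.
    """
    num_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    # Start with the uniform epsilon / K mass on every class...
    smooth_targets = torch.full_like(log_probs, epsilon / num_classes)
    # ...then place the remaining (1 - epsilon) on the gold class.
    smooth_targets.scatter_(-1, targets.unsqueeze(-1),
                            1.0 - epsilon + epsilon / num_classes)
    return -(smooth_targets * log_probs).sum(dim=-1).mean()


# Illustrative usage: a batch of 4 examples over 6 hypothetical emotion classes.
logits = torch.randn(4, 6)            # e.g., BERT [CLS] classification head outputs
targets = torch.tensor([0, 2, 5, 1])  # gold emotion labels
loss = label_smoothing_loss(logits, targets)
```

Recent PyTorch versions also expose the same behavior directly via the label_smoothing argument of torch.nn.CrossEntropyLoss, which is usually preferable in practice to a hand-rolled loss.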

Published

2024-07-15

How to Cite

Maratos, G., Sosea, T., & Caragea, C. (2024). Label Smoothing for Emotion Detection (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 16282-16283. https://doi.org/10.1609/aaai.v37i13.27001