AnnoBERT: Effectively Representing Multiple Annotators’ Label Choices to Improve Hate Speech Detection


  • Wenjie Yin Queen Mary University of London
  • Vibhor Agarwal University of Surrey
  • Aiqi Jiang Queen Mary University of London
  • Arkaitz Zubiaga Queen Mary University of London
  • Nishanth Sastry University of Surrey



Web and Social Media; Subjectivity in textual data; sentiment analysis; polarity/opinion identification and extraction; linguistic analyses of social media behavior; text categorization; topic recognition; demographic/gender/age identification


Supervised machine learning approaches often rely on a "ground truth" label. However, obtaining a single label through majority voting discards important subjectivity information in tasks such as hate speech detection. Existing neural network models also largely treat labels as categorical variables, ignoring the semantic information in the diverse label texts themselves. In this paper, we propose AnnoBERT, a first-of-its-kind architecture that integrates annotator characteristics and label text with a transformer-based model to detect hate speech: it builds a unique representation of each annotator via Collaborative Topic Regression (CTR) and integrates label text to enrich textual representations. During training, the model associates annotators with their label choices for a given piece of text; during evaluation, when label information is unavailable, it predicts the aggregated label of the participating annotators by exploiting the learnt associations. The proposed approach shows an advantage in detecting hate speech, especially in the minority class and in edge cases with annotator disagreement. The improvement in overall performance is largest when the dataset is more label-imbalanced, suggesting practical value for identifying real-world hate speech, since the volume of hate speech in the wild is extremely small on social media compared with normal (non-hate) speech. Through ablation studies, we show the relative contributions of annotator embeddings and label text to model performance, and we test a range of alternative annotator embeddings and label text combinations.
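The train/evaluate asymmetry described above can be illustrated with a minimal sketch. All names, dimensions, and the linear classifier below are hypothetical stand-ins (the paper's actual model uses a transformer text encoder and CTR-derived annotator embeddings); the sketch only shows the mechanism of scoring per (text, annotator) pair during training and aggregating per-annotator predictions at evaluation time:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 3 annotators, binary labels (1 = hate, 0 = non-hate).
# The text vector stands in for a transformer [CLS] embedding; the annotator
# embeddings stand in for CTR-derived representations (random here, untrained).
N_ANNOTATORS, TEXT_DIM, ANNO_DIM = 3, 8, 4
annotator_emb = rng.normal(size=(N_ANNOTATORS, ANNO_DIM))

# Stand-in linear classifier over concatenated [text ; annotator] features.
W = rng.normal(size=(TEXT_DIM + ANNO_DIM,))

def predict_for_annotator(text_vec, annotator_id):
    """Score one (text, annotator) pair -- the unit seen during training."""
    feats = np.concatenate([text_vec, annotator_emb[annotator_id]])
    return 1 if feats @ W > 0 else 0

def predict_aggregated(text_vec):
    """At evaluation, no label info is available: predict the aggregated
    label by majority vote over the participating annotators' predictions."""
    votes = [predict_for_annotator(text_vec, a) for a in range(N_ANNOTATORS)]
    return int(sum(votes) > N_ANNOTATORS / 2)

text_vec = rng.normal(size=TEXT_DIM)
label = predict_aggregated(text_vec)  # a single aggregated binary label
```

In the full model, the per-annotator scores are produced by the transformer conditioned on the annotator representation, and the aggregation recovers a conventional single-label prediction for comparison with majority-vote ground truth.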




How to Cite

Yin, W., Agarwal, V., Jiang, A., Zubiaga, A., & Sastry, N. (2023). AnnoBERT: Effectively Representing Multiple Annotators’ Label Choices to Improve Hate Speech Detection. Proceedings of the International AAAI Conference on Web and Social Media, 17(1), 902-913.