Modeling Annotator Perspective and Polarized Opinions to Improve Hate Speech Detection

Authors

  • Sohail Akhtar, University of Turin
  • Valerio Basile, University of Turin
  • Viviana Patti, University of Turin

DOI:

https://doi.org/10.1609/hcomp.v8i1.7473

Abstract

In this paper we propose an approach to exploit the fine-grained knowledge expressed by individual human annotators during a hate speech (HS) detection task, before the aggregation of single judgments into a gold standard dataset eliminates non-majority perspectives. We automatically divide the annotators into groups, aiming to group them by similar personal characteristics (ethnicity, social background, culture, etc.). To adopt a multilingual perspective, we performed classification experiments on three Twitter datasets in English and Italian. We created a separate gold standard for each group and trained a state-of-the-art deep learning model on each, showing that supervised models informed by different perspectives on the target phenomenon outperform a baseline of models trained on fully aggregated data. Finally, we implemented an ensemble approach that combines the single perspective-aware classifiers into an inclusive model. The results show that this strategy further improves classification performance, most notably with a significant boost in the recall of HS predictions.
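To make the pipeline described in the abstract concrete, the following is a minimal sketch, not the authors' code: annotators are grouped by the similarity of their raw judgments, a per-group gold standard is built by majority vote within each group, one classifier per group is trained, and the perspective-aware classifiers are combined with an inclusive (logical OR) ensemble. The clustering step, the toy data, and the logistic-regression model (standing in for the deep learning classifier used in the paper) are all assumptions made for illustration.

```python
# Sketch of a perspective-aware ensemble for HS detection (illustrative only).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy annotation matrix: rows = annotators, columns = tweets, values in {0, 1}
# (1 = hate speech). In practice these are the raw, pre-aggregation judgments.
annotations = rng.integers(0, 2, size=(6, 40))
texts = [f"tweet number {i}" for i in range(annotations.shape[1])]  # placeholder texts

# 1) Automatically split annotators into groups by the similarity of their
#    judgments, used here as a proxy for shared background/perspective.
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(annotations)

# 2) Build one gold standard per group: majority vote restricted to that group,
#    so non-majority perspectives are preserved in their own dataset.
gold_per_group = []
for g in np.unique(groups):
    votes = annotations[groups == g]
    gold_per_group.append((votes.mean(axis=0) >= 0.5).astype(int))

# 3) Train one perspective-aware classifier per group (a linear model here,
#    in place of the paper's deep learning model).
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)
classifiers = [LogisticRegression(max_iter=1000).fit(X, gold) for gold in gold_per_group]

# 4) Inclusive ensemble: flag a tweet as HS if any perspective-aware classifier
#    does, which is one way to boost HS recall.
predictions = np.array([clf.predict(X) for clf in classifiers])
ensemble = predictions.max(axis=0)
print("ensemble HS rate:", ensemble.mean())
```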

Published

2020-10-01

How to Cite

Akhtar, S., Basile, V., & Patti, V. (2020). Modeling Annotator Perspective and Polarized Opinions to Improve Hate Speech Detection. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 8(1), 151-154. https://doi.org/10.1609/hcomp.v8i1.7473