Supporting Online Toxicity Detection with Knowledge Graphs

Authors

  • Paula Reyero Lobo, Knowledge Media Institute, The Open University, UK
  • Enrico Daga, Knowledge Media Institute, The Open University, UK
  • Harith Alani, Knowledge Media Institute, The Open University, UK

DOI:

https://doi.org/10.1609/icwsm.v16i1.19398

Keywords:

Web and Social Media

Abstract

Due to the rise in toxic speech on social media and other online platforms, there is a growing need for systems that can automatically flag or filter such content. Various supervised machine learning approaches have been proposed, trained on manually annotated toxic speech corpora. However, annotators sometimes struggle to judge, or to agree on, which texts are toxic and which group is targeted in a given text. This may be due to bias, subjectivity, or unfamiliarity with the terminology used (e.g., domain language, slang). In this paper, we propose using a knowledge graph to better understand such toxic speech annotation issues. Our empirical results show that 3% of a sample of 19k texts mention terms associated with frequently attacked gender and sexual orientation groups that were not correctly identified by the annotators.
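To make the idea concrete, the following is a minimal, hypothetical sketch of the kind of check the abstract describes: scanning texts for terms that a knowledge graph associates with gender and sexual orientation groups, then flagging texts whose annotations miss those groups. The term dictionary, group labels, and the `annotated_targets` field are illustrative assumptions, not the authors' actual pipeline.

```python
import re

# Assumed: terms extracted from a knowledge graph, mapped to the group
# they are associated with (e.g., labels and synonyms from an ontology).
# This tiny dictionary is a placeholder, not real project data.
kg_terms = {
    "lesbian": "sexual_orientation",
    "trans": "gender",
    "nonbinary": "gender",
}

def find_group_mentions(text: str) -> set[str]:
    """Return the set of groups whose KG terms appear in the text."""
    groups = set()
    for term, group in kg_terms.items():
        # Whole-word, case-insensitive match of the KG term.
        if re.search(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
            groups.add(group)
    return groups

def flag_missed_targets(sample: list[dict]) -> list[dict]:
    """Flag texts mentioning KG group terms that annotators did not mark."""
    flagged = []
    for item in sample:
        mentioned = find_group_mentions(item["text"])
        missed = mentioned - set(item["annotated_targets"])
        if missed:
            flagged.append({**item, "missed_groups": sorted(missed)})
    return flagged

# Toy usage: the second text triggers no flag, the first one does.
sample = [
    {"text": "Some slur against trans people", "annotated_targets": []},
    {"text": "A neutral sentence", "annotated_targets": []},
]
for item in flag_missed_targets(sample):
    print(item["missed_groups"], "->", item["text"])
```

In a real setting the term list would come from queries over the knowledge graph rather than a hand-written dictionary, and matching would likely need lemmatization and disambiguation beyond simple regular expressions.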

Published

2022-05-31

How to Cite

Lobo, P. R., Daga, E., & Alani, H. (2022). Supporting Online Toxicity Detection with Knowledge Graphs. Proceedings of the International AAAI Conference on Web and Social Media, 16(1), 1414-1418. https://doi.org/10.1609/icwsm.v16i1.19398