Supporting Online Toxicity Detection with Knowledge Graphs
Keywords: Web and Social Media
Abstract
Due to the rise in toxic speech on social media and other online platforms, there is a growing need for systems that can automatically flag or filter such content. Various supervised machine learning approaches have been proposed, trained on manually annotated toxic speech corpora. However, annotators sometimes struggle to judge, or to agree on, which texts are toxic and which group is being targeted in a given text. This can be due to bias, subjectivity, or unfamiliarity with the terminology used (e.g., domain language, slang). In this paper, we propose the use of a knowledge graph to help better understand such toxic speech annotation issues. Our empirical results show that 3% of a sample of 19k texts mention terms associated with frequently attacked gender and sexual orientation groups that were not correctly identified by the annotators.
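The abstract describes the approach only at a high level. As a rough illustration of the underlying idea, not the authors' actual pipeline, the sketch below gathers group-related terminology from a public knowledge graph and flags texts that mention it. The use of the Wikidata SPARQL endpoint, the Q-IDs Q17888 (sexual orientation) and Q48264 (gender identity), and the simple word-boundary matching are all assumptions made for this example; the paper's knowledge graph and method may differ.

```python
"""Illustrative sketch: flag texts mentioning terms associated with
gender and sexual orientation groups, using terminology pulled from a
knowledge graph. Endpoint, Q-IDs, and matching logic are assumptions,
not the paper's published pipeline."""
import re
import requests

WIKIDATA_SPARQL = "https://query.wikidata.org/sparql"

# English labels and aliases of items classified as sexual orientations
# (Q17888) or gender identities (Q48264). Q-IDs assumed for illustration.
QUERY = """
SELECT DISTINCT ?term WHERE {
  { ?item wdt:P31 wd:Q17888 } UNION { ?item wdt:P31 wd:Q48264 }
  { ?item rdfs:label ?term } UNION { ?item skos:altLabel ?term }
  FILTER(LANG(?term) = "en")
}
"""


def fetch_terms() -> set[str]:
    """Fetch group-related terms from the knowledge graph."""
    resp = requests.get(
        WIKIDATA_SPARQL,
        params={"query": QUERY, "format": "json"},
        headers={"User-Agent": "toxicity-kg-sketch/0.1"},
        timeout=60,
    )
    resp.raise_for_status()
    rows = resp.json()["results"]["bindings"]
    return {row["term"]["value"].lower() for row in rows}


def flag_mentions(texts, terms):
    """Return (text, matched_term) pairs for texts that mention any term,
    using case-insensitive whole-word matching."""
    # Longest terms first, so multi-word terms win over their substrings.
    alternation = "|".join(
        re.escape(t) for t in sorted(terms, key=len, reverse=True)
    )
    pattern = re.compile(r"\b(" + alternation + r")\b", re.IGNORECASE)
    return [(t, m.group(0)) for t in texts if (m := pattern.search(t))]


if __name__ == "__main__":
    terms = fetch_terms()
    sample = ["an example post mentioning a community term", "a neutral sentence"]
    for text, term in flag_mentions(sample, terms):
        print(f"matched {term!r} in: {text}")
```

Such a lexicon-based pass over an annotated corpus can surface texts whose knowledge-graph terms suggest a targeted group that the annotation missed, which is the kind of annotation issue the paper quantifies.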
How to Cite
Lobo, P. R., Daga, E., & Alani, H. (2022). Supporting Online Toxicity Detection with Knowledge Graphs. Proceedings of the International AAAI Conference on Web and Social Media, 16(1), 1414-1418. Retrieved from https://ojs.aaai.org/index.php/ICWSM/article/view/19398