Watch Your Language: Investigating Content Moderation with Large Language Models

Authors

  • Deepak Kumar (Stanford University; University of California, San Diego)
  • Yousef Anees AbuHashem (Stanford University)
  • Zakir Durumeric (Stanford University)

DOI:

https://doi.org/10.1609/icwsm.v18i1.31358

Abstract

Large language models (LLMs) have exploded in popularity due to their ability to perform a wide array of natural language tasks. Text-based content moderation is one LLM use case that has received recent enthusiasm; however, there is little research investigating how LLMs can help in content moderation settings. In this work, we evaluate a suite of commodity LLMs on two common content moderation tasks: rule-based community moderation and toxic content detection. For rule-based community moderation, we instantiate 95 subcommunity-specific LLMs by prompting GPT-3.5 with rules from 95 Reddit subcommunities. We find that GPT-3.5 is effective at rule-based moderation for many communities, achieving a median accuracy of 64% and a median precision of 83%. For toxicity detection, we evaluate a range of LLMs (GPT-3, GPT-3.5, GPT-4, Gemini Pro, LLAMA 2) and show that LLMs significantly outperform widely used toxicity classifiers. However, we also find that increases in model size add only marginal benefit to toxicity detection, suggesting a potential performance plateau for LLMs on toxicity detection tasks. We conclude by outlining avenues for future work in studying LLMs and content moderation.
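To make the rule-based setup concrete, the sketch below shows one way a subcommunity-specific moderator could be instantiated by placing a community's rules in the prompt of a chat model. It is a minimal illustration assuming the OpenAI Python SDK; the model name, prompt wording, and the moderate() helper with its YES/NO parsing are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch of prompt-based, rule-aware moderation (not the paper's exact prompts).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def moderate(rules: str, comment: str) -> bool:
    """Return True if the model judges the comment to violate the given community rules."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; the paper evaluates several LLMs
        temperature=0,          # deterministic decisions for moderation
        messages=[
            {
                "role": "system",
                "content": "You are a content moderator for an online community "
                           "with the following rules:\n" + rules,
            },
            {
                "role": "user",
                "content": "Does the following comment violate the rules? "
                           "Answer YES or NO.\n\nComment: " + comment,
            },
        ],
    )
    answer = response.choices[0].message.content.strip().upper()
    return answer.startswith("YES")


# Example usage with a made-up rule set and comment:
# violates = moderate("1. Be civil.\n2. No self-promotion.", "Buy my course at ...")
```

One such prompted model per subcommunity yields the kind of community-specific moderators the abstract describes; the same pattern, with a toxicity-focused prompt instead of community rules, covers the toxicity detection task.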

Published

2024-05-28

How to Cite

Kumar, D., AbuHashem, Y. A., & Durumeric, Z. (2024). Watch Your Language: Investigating Content Moderation with Large Language Models. Proceedings of the International AAAI Conference on Web and Social Media, 18(1), 865-878. https://doi.org/10.1609/icwsm.v18i1.31358