A Holistic Approach to Undesired Content Detection in the Real World

Authors

  • Todor Markov (OpenAI)
  • Chong Zhang (OpenAI)
  • Sandhini Agarwal (OpenAI)
  • Florentine Eloundou Nekoul (OpenAI)
  • Theodore Lee (OpenAI)
  • Steven Adler (OpenAI)
  • Angela Jiang (OpenAI)
  • Lilian Weng (OpenAI)

DOI:

https://doi.org/10.1609/aaai.v37i12.26752

Keywords:

General

Abstract

We present a holistic approach to building a robust and useful natural language classification system for real-world content moderation. The success of such a system relies on a chain of carefully designed and executed steps, including the design of content taxonomies and labeling instructions, data quality control, an active learning pipeline to capture rare events, and a variety of methods to make the model robust and to avoid overfitting. Our moderation system is trained to detect a broad set of categories of undesired content, including sexual content, hateful content, violence, self-harm, and harassment. This approach generalizes to a wide range of different content taxonomies and can be used to create high-quality content classifiers that outperform off-the-shelf models.
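The abstract mentions an active learning pipeline for capturing rare events. One common strategy for this is uncertainty sampling, sketched below; the paper's actual selection criteria may differ, and `predict_score` is a hypothetical stand-in for the moderation classifier's predicted probability that a text contains undesired content.

```python
import random

# Hypothetical stand-in for the moderation model: returns a pseudo-random
# but deterministic-within-a-run "probability of undesired content".
def predict_score(text: str) -> float:
    rng = random.Random(hash(text) % (2**32))
    return rng.random()

def select_for_labeling(unlabeled_pool, batch_size=3):
    """Uncertainty sampling: pick the examples whose predicted score is
    closest to the 0.5 decision boundary, so human labeling effort is
    spent where the model is least certain -- one way to surface rare
    positive events that the current model cannot yet classify."""
    scored = [(abs(predict_score(t) - 0.5), t) for t in unlabeled_pool]
    scored.sort(key=lambda pair: pair[0])
    return [t for _, t in scored[:batch_size]]

pool = [f"example text {i}" for i in range(10)]
batch = select_for_labeling(pool)
print(batch)  # the three pool items nearest the decision boundary
```

In a production pipeline, the selected batch would be sent to human annotators, and the resulting labels folded back into the training set before the next selection round.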

Published

2023-06-26

How to Cite

Markov, T., Zhang, C., Agarwal, S., Eloundou Nekoul, F., Lee, T., Adler, S., Jiang, A., & Weng, L. (2023). A Holistic Approach to Undesired Content Detection in the Real World. Proceedings of the AAAI Conference on Artificial Intelligence, 37(12), 15009-15018. https://doi.org/10.1609/aaai.v37i12.26752

Section

AAAI Special Track on Safe and Robust AI