Operationalizing Content Moderation “Accuracy” in the Digital Services Act
Abstract
The Digital Services Act, recently adopted by the EU, requires social media platforms to report the "accuracy" of their automated content moderation systems. The colloquial term is vague, or open-textured: the literal accuracy (the number of correct predictions divided by the total) is not suitable for problems with large class imbalance, and the ground truth and dataset against which accuracy is to be measured are unspecified. Without further specification, the regulatory requirement allows for deficient reporting. In this interdisciplinary work, we operationalize "accuracy" reporting by refining legal concepts and relating them to technical implementation. We start by elucidating the legislative purpose of the Act to legally justify an interpretation of "accuracy" as precision and recall. These metrics remain informative in class-imbalanced settings and reflect the proportional balancing of the Fundamental Rights of the EU Charter. We then focus on the estimation of recall, as its naive estimation can incur extremely high annotation costs and disproportionately interfere with the platform's right to conduct business. Through a simulation study, we show that recall can be efficiently estimated using stratified sampling with trained classifiers, and we provide concrete recommendations for its application. Finally, we present a case study of recall reporting for a subset of Reddit under the Act. Based on the language in the Act, we identify a number of ways recall could be reported due to underspecification. We report on one possibility using our improved estimator, and discuss the implications and areas for further legal clarification.
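To make the class-imbalance point concrete, the numeric sketch below is offered for illustration only; it is not from the paper, and the 1% prevalence and the two toy moderation systems are assumptions. It shows how a system that flags nothing can match a genuinely useful system on literal accuracy, while precision and recall separate the two.

```python
# Illustrative sketch (assumed numbers, not the paper's data):
# under 1% prevalence, literal accuracy barely distinguishes a
# useless system from a useful one; precision and recall do.
import numpy as np

rng = np.random.default_rng(0)
y = rng.random(100_000) < 0.01          # ground truth: 1% of items violate policy

# System A: never flags anything.
pred_a = np.zeros_like(y)

# System B: catches 90% of violations, with a 1% false-positive rate.
pred_b = np.where(y, rng.random(y.size) < 0.90, rng.random(y.size) < 0.01)

def report(pred, truth):
    tp = (pred & truth).sum()
    accuracy = (pred == truth).mean()
    precision = tp / pred.sum() if pred.sum() else float("nan")
    recall = tp / truth.sum()
    return accuracy, precision, recall

for name, pred in [("flag nothing", pred_a), ("real system", pred_b)]:
    acc, prec, rec = report(pred, y)
    print(f"{name:>12}: accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f}")
```

Both systems report roughly 99% literal accuracy, which is why the abstract argues that the plain-accuracy reading of the Act permits deficient reporting.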
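The recall-estimation idea can likewise be sketched in simulation. The following is a minimal illustration of stratified sampling guided by an auxiliary classifier score, not the paper's estimator; the corpus size, prevalence, system error rates, scorer quality, stratum boundaries, and annotation budget are all assumptions chosen for the sketch.

```python
# Minimal sketch: estimating recall by annotating unflagged items,
# comparing a uniform sample against stratified sampling driven by
# an (assumed) auxiliary classifier score. All parameters are made up.
import numpy as np

rng = np.random.default_rng(0)

N = 200_000                               # corpus size (assumed)
y = rng.random(N) < 0.01                  # violating content is rare (assumed)

# Simulated moderation system: flags 80% of true violations,
# plus a small false-positive rate on benign content.
flagged = np.where(y, rng.random(N) < 0.80, rng.random(N) < 0.002)
tp = (flagged & y).sum()
true_recall = tp / y.sum()

# Recall estimation requires annotating *unflagged* items to find
# false negatives; annotation is costly, so both estimators below
# spend the same fixed budget.
unflagged = ~flagged
budget = 2_000

# (1) Naive estimator: uniform sample of unflagged items.
idx = rng.choice(np.flatnonzero(unflagged), size=budget, replace=False)
fn_naive = y[idx].mean() * unflagged.sum()
recall_naive = tp / (tp + fn_naive)

# (2) Stratified estimator: an auxiliary classifier score (here a
# noisy proxy for the true label) defines strata over unflagged
# items; sample each stratum, then reweight by stratum size.
score = y + rng.normal(0, 0.6, N)          # assumed classifier score
edges = np.quantile(score[unflagged], [0.5, 0.9, 0.99])
strata = np.digitize(score, edges)

fn_strat = 0.0
for s in range(4):
    pool = np.flatnonzero(unflagged & (strata == s))
    n_s = min(budget // 4, len(pool))
    sample = rng.choice(pool, size=n_s, replace=False)
    fn_strat += y[sample].mean() * len(pool)   # stratified expansion estimate

recall_strat = tp / (tp + fn_strat)

print(f"true recall:        {true_recall:.3f}")
print(f"naive estimate:     {recall_naive:.3f}")
print(f"stratified estimate:{recall_strat:.3f}")
```

The intuition is that equal allocation heavily oversamples the small, high-score stratum where false negatives concentrate, which should reduce the variance of the recall estimate relative to a uniform sample at the same annotation budget; the paper's simulation study makes this comparison rigorously.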
Published
2024-10-16
Section
Full Archival Papers