Pterodactyl: Two-Step Redaction of Images for Robust Face Deidentification

Authors

  • Abdullah Alshaibani, Purdue University
  • Alexander J. Quinn, Purdue University

Keywords

Crowdsourcing, Redaction, Image Processing, Privacy, Human Perception, Machine Learning

Abstract

Redacting faces in images is trivial when the number of faces is small and the annotator is trusted. For large batches, automated face detection has been the only viable solution, yet even the best ML-based detectors have error rates that would be unacceptable for sensitive applications. Crowd-based face detection/redaction systems exist, but their process and cost make them infeasible. We present Pterodactyl, a system for detecting (and redacting) faces at scale. It uses the AdaptiveFocus filter, which splits the image into smaller regions and uses machine learning to select a median filter level for each region, hiding facial identities while still allowing those faces to be detected by crowd workers. The filter uses a convolutional neural network trained on images labeled with the median filter level that permits detection but prevents identification. This filter allows Pterodactyl to achieve human-level detection with just 14% of the crowd labor required by another recent crowd-based face detection/redaction system (IntoFocus). Our evaluation found that Pterodactyl's redaction accuracy was higher than that of a commercial machine-based application and on par with IntoFocus, while requiring 86% less crowd work (measured in comparable tasks).
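The region-splitting idea behind AdaptiveFocus can be sketched roughly as follows. This is not the authors' implementation: the block size, the naive median filter, and `predict_filter_level` (a stand-in for the paper's trained CNN) are all illustrative assumptions.

```python
import numpy as np

def predict_filter_level(region):
    # Hypothetical stand-in for Pterodactyl's CNN, which predicts the
    # median-filter level that hides identity while keeping the face
    # detectable. Here a crude contrast heuristic picks a kernel size.
    return 3 if region.std() > 20 else 5

def median_filter(region, k):
    # Naive k x k median filter; edges are handled by replicate padding.
    pad = k // 2
    padded = np.pad(region, pad, mode="edge")
    out = np.empty_like(region)
    h, w = region.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

def adaptive_focus(image, block=32):
    # Split the (grayscale) image into blocks and filter each block
    # with its own predicted median-filter level.
    out = image.copy()
    h, w = image.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            region = image[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = median_filter(
                region, predict_filter_level(region))
    return out

img = (np.random.rand(64, 64) * 255).astype(np.uint8)
redacted = adaptive_focus(img)
print(redacted.shape)  # (64, 64)
```

The per-region choice is the key point: a single global blur strong enough to de-identify every face would also make small faces undetectable, whereas region-wise levels can be tuned to each face's scale.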

Published

2021-10-04

How to Cite

Alshaibani, A., & Quinn, A. J. (2021). Pterodactyl: Two-Step Redaction of Images for Robust Face Deidentification. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 9(1), 27-34. Retrieved from https://ojs.aaai.org/index.php/HCOMP/article/view/18937