TY - JOUR
AU - Liu, Kang
AU - Tan, Benjamin
AU - Garg, Siddharth
PY - 2021/05/18
Y2 - 2024/03/29
TI - Subverting Privacy-Preserving GANs: Hiding Secrets in Sanitized Images
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 35
IS - 17
SE - AAAI Special Track on AI for Social Impact
DO - 10.1609/aaai.v35i17.17743
UR - https://ojs.aaai.org/index.php/AAAI/article/view/17743
SP - 14849-14856
AB - Unprecedented data collection and sharing have exacerbated privacy concerns and led to increasing interest in privacy-preserving tools that remove sensitive attributes from images while maintaining useful information for other tasks. Currently, state-of-the-art approaches use privacy-preserving generative adversarial networks (PP-GANs) for this purpose, for instance, to enable reliable facial expression recognition without leaking users' identity. However, PP-GANs do not offer formal proofs of privacy and instead rely on experimentally measuring information leakage using classification accuracy on the sensitive attributes of deep learning (DL)-based discriminators. In this work, we question the rigor of such checks by subverting existing privacy-preserving GANs for facial expression recognition. We show that it is possible to hide the sensitive identification data in the sanitized output images of such PP-GANs for later extraction, which can even allow for reconstruction of the entire input images, while satisfying privacy checks. We demonstrate our approach via a PP-GAN-based architecture and provide qualitative and quantitative evaluations using two public datasets. Our experimental results raise fundamental questions about the need for more rigorous privacy checks of PP-GANs, and we provide insights into the social impact of these.
ER -