Representation Magnitude Has a Liability to Privacy Vulnerability

Authors

  • Xingli Fang, North Carolina State University
  • Jung-Eun Kim, North Carolina State University

DOI:

https://doi.org/10.1609/aies.v7i1.31646

Abstract

Privacy-preserving approaches for machine learning (ML) models have made substantial progress in recent years. However, it remains unclear under which circumstances and conditions a model becomes privacy-vulnerable, which makes it challenging for ML models to maintain both performance and privacy. In this paper, we first explore the disparity between member and non-member data in models' representations under common training frameworks. We identify how this representation magnitude disparity correlates with privacy vulnerability and examine how the correlation impacts membership privacy leakage. Based on these observations, we propose the Saturn Ring Classifier Module (SRCM), a plug-in model-level solution to mitigate membership privacy leakage. By confining the representation space while keeping it effective, our approach ameliorates models' privacy vulnerability while maintaining generalizability. The code for this work is available at: https://github.com/JEKimLab/AIES2024SRCM
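The abstract describes SRCM only at a high level. As a rough illustration of the "confined yet effective representation space" idea, the PyTorch sketch below rescales penultimate-layer features so that their L2 norm falls inside a fixed band, i.e., an annulus ("ring"). The class name, radii, and rescaling rule are illustrative assumptions, not the authors' implementation; see the linked repository for the actual method.

```python
import torch
import torch.nn as nn


class SaturnRingSketch(nn.Module):
    """Hypothetical sketch of a ring-constrained classifier head.

    Rescales penultimate features so each vector's L2 norm lies in a
    fixed band [r_min, r_max] before the final linear layer. This is an
    illustrative guess at the "confined representation space" idea from
    the abstract, NOT the paper's SRCM implementation.
    """

    def __init__(self, feat_dim: int, num_classes: int,
                 r_min: float = 5.0, r_max: float = 10.0):
        super().__init__()
        assert 0.0 < r_min < r_max, "ring radii must satisfy 0 < r_min < r_max"
        self.r_min, self.r_max = r_min, r_max
        self.fc = nn.Linear(feat_dim, num_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Clamp each feature vector's magnitude into [r_min, r_max],
        # confining representation magnitude while preserving direction.
        norms = feats.norm(p=2, dim=1, keepdim=True).clamp_min(1e-12)
        target = norms.clamp(self.r_min, self.r_max)
        return self.fc(feats * (target / norms))


if __name__ == "__main__":
    head = SaturnRingSketch(feat_dim=512, num_classes=10)
    feats = torch.randn(4, 512) * 30   # deliberately large magnitudes
    logits = head(feats)
    print(logits.shape)                # torch.Size([4, 10])
```

Such a head would replace the usual linear classifier on top of a backbone (e.g., a ResNet), so that member and non-member inputs cannot be separated by feature magnitude alone.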

Published

2024-10-16

How to Cite

Fang, X., & Kim, J.-E. (2024). Representation Magnitude Has a Liability to Privacy Vulnerability. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 411-420. https://doi.org/10.1609/aies.v7i1.31646