Proactive Privacy-preserving Learning for Retrieval

Authors

  • Peng-Fei Zhang, University of Queensland
  • Zi Huang, University of Queensland
  • Xin-Shun Xu, Shandong University

DOI:

https://doi.org/10.1609/aaai.v35i4.16449

Keywords:

Ethics -- Bias, Fairness, Transparency & Privacy

Abstract

Deep Neural Networks (DNNs) have recently achieved remarkable performance in image retrieval, yet they pose serious threats to data privacy. On the one hand, a deployed DNN-based system may be misused to look up data without consent. On the other hand, organizations or individuals may, legally or illegally, collect data to train high-performance models beyond the scope of legitimate purposes. Unfortunately, little effort has been made to safeguard data privacy against such malicious uses of DNNs. In this paper, we propose a data-centric Proactive Privacy-preserving Learning (PPL) algorithm for hashing-based retrieval, which achieves protection by employing a generator to transform the original data into adversarial data with quasi-imperceptible perturbations before release. When the data source is infiltrated, the adversarial data confuse menacing retrieval models into making erroneous predictions. Since prior knowledge of the malicious models is not available, a surrogate retrieval model is introduced to act as the fooling target. The framework is trained through a two-player game between the generator and the surrogate model. More specifically, the generator is updated to enlarge the gap between the adversarial data and the original data, aiming to lower the search accuracy of the surrogate model; the surrogate model, in contrast, is trained with the opposing objective of maintaining search performance. As a result, an effective and robust adversarial generator is obtained. Furthermore, to facilitate effective optimization, a Gradient Reversal Layer (GRL) module is inserted to connect the two models, enabling the two-player game to be carried out in a single learning step. Extensive experiments on three widely used realistic datasets demonstrate the effectiveness of the proposed method.
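The sketch below illustrates the general idea described in the abstract: a perturbation generator and a surrogate hashing model are connected by a Gradient Reversal Layer so that a single backward pass updates the surrogate to minimize a retrieval loss while pushing the generator to maximize it. This is a minimal PyTorch-style illustration only; the component names (PerturbationGenerator, SurrogateHashNet, similarity_loss), architectures, and the toy hashing loss are assumptions for exposition and do not reflect the authors' actual models or objectives.

```python
import torch
import torch.nn as nn

# Gradient Reversal Layer (GRL): identity in the forward pass,
# negated gradient in the backward pass.
class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the sign so the generator ascends the loss the surrogate descends.
        return -ctx.lambd * grad_output, None

# Hypothetical generator: produces a bounded, quasi-imperceptible perturbation.
class PerturbationGenerator(nn.Module):
    def __init__(self, eps=8.0 / 255.0):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        delta = self.eps * self.net(x)            # perturbation bounded by eps
        return torch.clamp(x + delta, 0.0, 1.0)   # adversarial image

# Hypothetical surrogate retrieval model mapping images to continuous hash codes.
class SurrogateHashNet(nn.Module):
    def __init__(self, code_len=32):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, code_len), nn.Tanh(),
        )

    def forward(self, x):
        return self.backbone(x)

def similarity_loss(codes, labels):
    """Toy similarity-preserving hashing loss (labels: multi-hot float tensor)."""
    sim = (labels @ labels.t() > 0).float() * 2 - 1    # +1 similar, -1 dissimilar
    inner = codes @ codes.t() / codes.size(1)          # normalized inner products
    return ((inner - sim) ** 2).mean()

gen, surrogate = PerturbationGenerator(), SurrogateHashNet()
opt = torch.optim.Adam(list(gen.parameters()) + list(surrogate.parameters()), lr=1e-4)

def train_step(images, labels):
    """One-step two-player update: images in [0,1], shape (B, 3, H, W)."""
    x_adv = gen(images)
    # The GRL sits between generator and surrogate: the surrogate receives normal
    # gradients (maintain retrieval performance on adversarial data), while the
    # generator receives reversed gradients (degrade the surrogate's accuracy).
    codes = surrogate(GradReverse.apply(x_adv, 1.0))
    loss = similarity_loss(codes, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Under these assumptions, both players are updated with one forward and one backward pass per batch, which is the practical benefit the abstract attributes to inserting the GRL between the two models.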

Published

2021-05-18

How to Cite

Zhang, P.-F., Huang, Z., & Xu, X.-S. (2021). Proactive Privacy-preserving Learning for Retrieval. Proceedings of the AAAI Conference on Artificial Intelligence, 35(4), 3369-3376. https://doi.org/10.1609/aaai.v35i4.16449

Section

AAAI Technical Track on Computer Vision III