Learning Model-Based Privacy Protection under Budget Constraints

Authors

  • Junyuan Hong, Michigan State University
  • Haotao Wang, University of Texas at Austin
  • Zhangyang Wang, University of Texas at Austin
  • Jiayu Zhou, Michigan State University

Keywords

Ethics -- Bias, Fairness, Transparency & Privacy

Abstract

Protecting privacy in gradient-based learning has become increasingly critical as more sensitive information is used for training. Many existing solutions protect the sensitive gradients by constraining the overall privacy cost within a constant budget, where the protection mechanism is hand-designed and empirically calibrated to boost the utility of the resulting model. However, it remains challenging to choose a protection mechanism adapted to a specific budget constraint so that utility is maximized. To this end, we propose a novel Learning-to-Protect algorithm that automatically learns a model-based protector from a set of non-private learning tasks. The learned protector can then be applied to private learning tasks to improve utility within the given privacy budget. Our empirical studies on both synthetic and real datasets demonstrate that the proposed algorithm achieves superior utility under a given privacy constraint and generalizes better than hand-designed competitors to new private datasets with different distributions.
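To make the setting concrete, the hand-designed protection the abstract contrasts with is typically a DP-SGD-style mechanism: clip each gradient to a fixed L2 norm and add Gaussian noise scaled to that bound. The sketch below is an illustration of that baseline under assumed parameter names (`clip_norm`, `noise_multiplier`), not the paper's learned protector.

```python
import numpy as np

def protect_gradient(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Hand-designed protector (DP-SGD style): clip the gradient's L2 norm
    to clip_norm, then add Gaussian noise whose scale is tied to the clip
    bound. The noise_multiplier is calibrated offline against the overall
    privacy budget; names and defaults here are illustrative assumptions."""
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    # Scale down (never up) so the clipped gradient has norm <= clip_norm.
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise
```

The paper's point is that the pair (clip_norm, noise_multiplier) is fixed by hand here; the proposed Learning-to-Protect approach instead learns such a protector from non-private tasks so it adapts to the budget.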

Published

2021-05-18

How to Cite

Hong, J., Wang, H., Wang, Z., & Zhou, J. (2021). Learning Model-Based Privacy Protection under Budget Constraints. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 7702-7710. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16941

Section

AAAI Technical Track on Machine Learning II