Learning Model-Based Privacy Protection under Budget Constraints


  • Junyuan Hong Michigan State University
  • Haotao Wang University of Texas at Austin
  • Zhangyang Wang University of Texas at Austin
  • Jiayu Zhou Michigan State University




Ethics -- Bias, Fairness, Transparency & Privacy


Protecting privacy in gradient-based learning has become increasingly critical as more sensitive information is being used. Many existing solutions seek to protect the sensitive gradients by constraining the overall privacy cost within a constant budget, where the protection is hand-designed and empirically calibrated to boost the utility of the resulting model. However, it remains challenging to choose the proper protection adapted to specific constraints so that utility is maximized. To this end, we propose a novel Learning-to-Protect algorithm that automatically learns a model-based protector from a set of non-private learning tasks. The learned protector can be applied to private learning tasks to improve utility within a specific privacy budget constraint. Our empirical studies on both synthetic and real datasets demonstrate that the proposed algorithm achieves superior utility under a given privacy constraint and generalizes well to new private datasets with different distributions, compared to hand-designed competitors.
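For context, the hand-designed protection the abstract contrasts with typically follows the DP-SGD recipe: clip each gradient to a fixed norm, then add Gaussian noise calibrated to the privacy budget. The sketch below illustrates that baseline only (it is not the paper's learned protector); the function name, clipping threshold, and noise scale are illustrative assumptions.

```python
import numpy as np

def protect_gradient(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Hand-designed gradient protection (DP-SGD style, illustrative):
    clip the gradient to `clip_norm`, then add Gaussian noise whose
    standard deviation is `noise_multiplier * clip_norm`."""
    rng = np.random.default_rng(0) if rng is None else rng
    norm = np.linalg.norm(grad)
    # Scale down only if the gradient exceeds the clipping threshold.
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

# With noise disabled, a gradient of norm 5 is clipped to norm 1.
g = np.array([3.0, 4.0])
protected = protect_gradient(g, clip_norm=1.0, noise_multiplier=0.0)
print(np.linalg.norm(protected))  # -> 1.0
```

The paper's Learning-to-Protect approach replaces this fixed, hand-calibrated transformation with a protector learned from non-private tasks, so the protection adapts to the budget rather than being set by hand.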




How to Cite

Hong, J., Wang, H., Wang, Z., & Zhou, J. (2021). Learning Model-Based Privacy Protection under Budget Constraints. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 7702-7710. https://doi.org/10.1609/aaai.v35i9.16941



AAAI Technical Track on Machine Learning II