Differential Privacy Preservation for Deep Auto-Encoders: an Application of Human Behavior Prediction

Authors

  • NhatHai Phan University of Oregon
  • Yue Wang University of North Carolina at Charlotte
  • Xintao Wu University of Arkansas
  • Dejing Dou University of Oregon

DOI:

https://doi.org/10.1609/aaai.v30i1.10165

Keywords:

differential privacy, deep learning, health social network, human behavior prediction

Abstract

In recent years, deep learning has spread across both academia and industry, with many exciting real-world applications. This development has also raised obvious privacy issues, yet there has been a lack of scientific study of privacy preservation in deep learning. In this paper, we concentrate on the auto-encoder, a fundamental component in deep learning, and propose the deep private auto-encoder (dPA). Our main idea is to enforce ε-differential privacy by perturbing the objective functions of the traditional deep auto-encoder, rather than its results. We apply the dPA to human behavior prediction in a health social network. Theoretical analysis and thorough experimental evaluations show that the dPA is highly effective and efficient, and it significantly outperforms existing solutions.
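The core idea in the abstract, perturbing the objective function rather than the trained result, follows the functional-mechanism pattern: write the loss in polynomial form, add calibrated Laplace noise to its coefficients, and then optimize the noisy objective. The sketch below illustrates this pattern on a toy least-squares objective; the `epsilon` value and the sensitivity bound are assumptions for illustration, not the bounds derived in the paper, and the dPA itself applies this idea to auto-encoder objectives rather than linear regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n points with d features, targets in [0, 1].
n, d = 200, 3
X = rng.uniform(-1, 1, size=(n, d))
y = rng.uniform(0, 1, size=n)

# Least-squares objective in polynomial form:
#   L(w) = w^T A w - 2 b^T w + const,  with A = X^T X, b = X^T y.
A = X.T @ X
b = X.T @ y

# Functional-mechanism-style step (illustrative): perturb the
# polynomial coefficients with Laplace noise instead of perturbing
# the final weights. Sensitivity bound here is a loose placeholder.
epsilon = 1.0                       # assumed privacy budget
sensitivity = 2 * (d + d * d)      # illustrative, not the paper's bound
scale = sensitivity / epsilon

A_noisy = A + rng.laplace(scale=scale, size=A.shape)
b_noisy = b + rng.laplace(scale=scale, size=b.shape)

# Symmetrize and shift eigenvalues so the perturbed objective
# remains convex and the minimizer is well defined.
A_noisy = (A_noisy + A_noisy.T) / 2
min_eig = np.linalg.eigvalsh(A_noisy).min()
A_noisy += np.eye(d) * max(0.0, 1e-3 - min_eig)

# Minimize the *perturbed* objective; the released weights touch the
# data only through the noisy coefficients A_noisy and b_noisy.
w_private = np.linalg.solve(A_noisy, b_noisy)
print(w_private.shape)
```

The point of the pattern is that downstream optimization of the noisy objective is post-processing, so it consumes no additional privacy budget.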

Published

2016-02-21

How to Cite

Phan, N., Wang, Y., Wu, X., & Dou, D. (2016). Differential Privacy Preservation for Deep Auto-Encoders: an Application of Human Behavior Prediction. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.10165

Section

Technical Papers: Machine Learning Applications