On the Differential Privacy of Bayesian Inference

Authors

  • Zuhe Zhang, University of Melbourne
  • Benjamin Rubinstein, University of Melbourne
  • Christos Dimitrakakis, Univ-Lille-3 and Chalmers University of Technology

DOI:

https://doi.org/10.1609/aaai.v30i1.10254

Abstract

We study how to communicate findings of Bayesian inference to third parties, while preserving the strong guarantee of differential privacy. Our main contributions are four different algorithms for private Bayesian inference on probabilistic graphical models. These include two mechanisms for adding noise to the Bayesian updates, either directly to the posterior parameters, or to their Fourier transform so as to preserve update consistency. We also utilise a recently introduced posterior sampling mechanism, for which we prove bounds for the specific but general case of discrete Bayesian networks; and we introduce a maximum-a-posteriori private mechanism. Our analysis includes utility and privacy bounds, with a novel focus on the influence of graph structure on privacy. Worked examples and experiments with Bayesian naive Bayes and Bayesian linear regression illustrate the application of our mechanisms.
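The first class of mechanisms in the abstract, adding noise directly to the posterior parameters, can be illustrated with a minimal sketch. The example below is a hypothetical illustration, not the paper's exact algorithm: it applies the standard Laplace mechanism to the sufficient-statistic counts of a Dirichlet-multinomial posterior, a common conjugate update in discrete Bayesian networks. The function name, the sensitivity constant, and the clipping step are assumptions made for this sketch.

```python
import numpy as np

def private_dirichlet_posterior(counts, alpha_prior, epsilon, rng=None):
    """Sketch of a privately released Dirichlet posterior.

    Adds Laplace noise to the observed category counts (the sufficient
    statistics of the multinomial likelihood) before the conjugate
    update alpha_prior + counts. Changing one record moves one unit of
    count mass between two categories, so the L1 sensitivity of the
    count vector is taken to be 2 (an assumption of this sketch).
    """
    rng = np.random.default_rng(rng)
    counts = np.asarray(counts, dtype=float)
    sensitivity = 2.0
    noisy = counts + rng.laplace(0.0, sensitivity / epsilon, size=counts.shape)
    # Clip so the released parameters remain a valid Dirichlet posterior.
    noisy = np.clip(noisy, 0.0, None)
    return np.asarray(alpha_prior, dtype=float) + noisy
```

A third party receiving the noisy parameters can perform downstream Bayesian reasoning as usual; larger epsilon means less noise and weaker privacy. The Fourier-transform variant mentioned in the abstract instead perturbs a transformed representation of the counts so that the noisy statistics remain mutually consistent across updates.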

Published

2016-03-02

How to Cite

Zhang, Z., Rubinstein, B., & Dimitrakakis, C. (2016). On the Differential Privacy of Bayesian Inference. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.10254

Section

Technical Papers: Machine Learning Methods