Subspace Differential Privacy

Authors

  • Jie Gao, Rutgers University
  • Ruobin Gong, Rutgers University
  • Fang-Yi Yu, Harvard University

DOI:

https://doi.org/10.1609/aaai.v36i4.20315

Keywords:

Data Mining & Knowledge Management (DMKM)

Abstract

Many data applications have certain invariant constraints due to practical needs. Data curators who employ differential privacy need to respect such constraints on the sanitized data product as a primary utility requirement. Invariants challenge the formulation, implementation, and interpretation of privacy guarantees. We propose subspace differential privacy to honestly characterize the dependence of the sanitized output on confidential aspects of the data. We discuss two design frameworks that convert well-known differentially private mechanisms, such as the Gaussian and the Laplace mechanisms, to subspace differentially private ones that respect the invariants specified by the curator. For linear queries, we discuss the design of near-optimal mechanisms that minimize the mean squared error. Subspace differentially private mechanisms eliminate the need for post-processing due to invariants, preserve transparency and statistical intelligibility of the output, and are suitable for distributed implementation. We showcase the proposed mechanisms on the 2020 Census Disclosure Avoidance demonstration data, and a spatio-temporal dataset of mobile access point connections on a large university campus.
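To give a flavor of the idea, the following is a minimal illustrative sketch (not the paper's exact construction) of one way a Gaussian mechanism can be made to respect linear invariants: encode the invariants as rows of a matrix `A`, and add Gaussian noise projected onto the null space of `A`, so that `A @ y` is released exactly while the remaining directions are perturbed. The function name, the use of a pseudoinverse projector, and the example data are all assumptions for illustration only; the paper's mechanisms and their privacy calibration are specified in the full text.

```python
import numpy as np

def subspace_gaussian_mechanism(y, A, sigma, rng=None):
    """Illustrative sketch: perturb the query answer y with Gaussian noise
    projected onto null(A), so linear invariants A @ y are preserved exactly.
    Noise scale sigma is taken as given; calibrating it to a privacy budget
    is outside the scope of this sketch."""
    rng = np.random.default_rng() if rng is None else rng
    y = np.asarray(y, dtype=float)
    # Orthogonal projector onto the null space of A: P = I - A^+ A
    P = np.eye(y.size) - np.linalg.pinv(A) @ A
    noise = rng.normal(scale=sigma, size=y.size)
    return y + P @ noise

# Hypothetical example: a 2x2 contingency table flattened to length 4,
# with the grand total held invariant (A is the all-ones row vector).
y = np.array([10.0, 20.0, 30.0, 40.0])
A = np.ones((1, 4))
y_tilde = subspace_gaussian_mechanism(y, A, sigma=1.0)
# A @ y_tilde equals A @ y = 100 up to floating point: the total is invariant.
```

Because the noise lives entirely in the subspace orthogonal to the invariant constraints, no post-processing step is needed to restore the total, which is the transparency property the abstract highlights.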

Published

2022-06-28

How to Cite

Gao, J., Gong, R., & Yu, F.-Y. (2022). Subspace Differential Privacy. Proceedings of the AAAI Conference on Artificial Intelligence, 36(4), 3986-3995. https://doi.org/10.1609/aaai.v36i4.20315

Section

AAAI Technical Track on Data Mining and Knowledge Management