Privacy-Preserving Gradient Boosting Decision Trees

Authors

  • Qinbin Li, National University of Singapore
  • Zhaomin Wu, National University of Singapore
  • Zeyi Wen, The University of Western Australia
  • Bingsheng He, National University of Singapore

DOI:

https://doi.org/10.1609/aaai.v34i01.5422

Abstract

The Gradient Boosting Decision Tree (GBDT) has become a popular machine learning model for a wide range of tasks in recent years. In this paper, we study how to improve the model accuracy of GBDT while preserving the strong guarantee of differential privacy. Sensitivity and privacy budget are two key design aspects that determine the effectiveness of differentially private models. Existing solutions for GBDT with differential privacy suffer from significant accuracy loss due to overly loose sensitivity bounds and ineffective privacy budget allocations (especially across the different trees in the GBDT model). Loose sensitivity bounds force the mechanism to inject more noise to reach a fixed privacy level, and ineffective privacy budget allocations exacerbate the accuracy loss, especially when the number of trees is large. We therefore propose a new GBDT training algorithm that achieves tighter sensitivity bounds and more effective noise allocation. Specifically, by investigating the properties of the gradients and the contribution of each tree in GBDTs, we propose adaptively controlling the gradients of the training data at each iteration and clipping the leaf nodes in order to tighten the sensitivity bounds. Furthermore, we design a novel boosting framework that allocates the privacy budget across trees so that the accuracy loss can be further reduced. Our experiments show that our approach achieves much better model accuracy than other baselines.
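To make the abstract's two ideas concrete, the sketch below shows where gradient clipping, leaf-value clipping, and per-tree budget splitting would sit inside a GBDT training loop. It is a minimal illustration, not the paper's algorithm: it assumes a squared loss, depth-1 trees, a non-private split search, an even per-tree budget split, and an illustrative leaf-value sensitivity, and all names (`Stump`, `fit_stump`, `train_dp_gbdt`) are hypothetical.

```python
import numpy as np

class Stump:
    """Depth-1 regression tree: one split, two leaf values."""
    def __init__(self, feature, threshold, left_value, right_value):
        self.feature, self.threshold = feature, threshold
        self.left_value, self.right_value = left_value, right_value

    def predict(self, X):
        go_left = X[:, self.feature] <= self.threshold
        return np.where(go_left, self.left_value, self.right_value)

def fit_stump(X, residual):
    """Greedy best split by squared error (plain, NON-private split search;
    the paper's method also privatizes split selection)."""
    best = (np.inf, 0, 0.0)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j])[:-1]:   # drop max so both sides are nonempty
            left = X[:, j] <= thr
            sse = (residual[left].var() * left.sum()
                   + residual[~left].var() * (~left).sum())
            if sse < best[0]:
                best = (sse, j, thr)
    _, j, thr = best
    return j, thr, X[:, j] <= thr

def train_dp_gbdt(X, y, n_trees=20, total_eps=1.0, g_clip=1.0, lr=0.1, rng=None):
    rng = rng or np.random.default_rng(0)
    eps_tree = total_eps / n_trees   # naive even split; the paper argues for a
                                     # more effective cross-tree allocation
    pred, trees = np.zeros(len(y)), []
    for _ in range(n_trees):
        # Bound per-example gradients (squared loss: g = pred - y) so the
        # sensitivity of each leaf value is bounded -> tighter noise scale.
        g = np.clip(pred - y, -g_clip, g_clip)
        j, thr, left = fit_stump(X, -g)
        vals = []
        for mask in (left, ~left):
            v = (-g[mask]).mean() if mask.any() else 0.0
            sens = g_clip / max(1, mask.sum())        # assumed leaf sensitivity
            v += rng.laplace(scale=sens / eps_tree)   # Laplace mechanism
            vals.append(float(np.clip(v, -g_clip, g_clip)))  # leaf clipping
        trees.append(Stump(j, thr, vals[0], vals[1]))
        pred += lr * trees[-1].predict(X)
    return trees

if __name__ == "__main__":
    X = np.random.default_rng(1).normal(size=(200, 4))
    y = X[:, 0] + 0.1 * np.random.default_rng(2).normal(size=200)
    model = train_dp_gbdt(X, y, n_trees=10, total_eps=1.0)
    pred = sum(0.1 * t.predict(X) for t in model)   # lr = 0.1 as in training
    print("first five predictions:", np.round(pred[:5], 3))
```

Note that the even split of `total_eps` across trees is exactly the naive allocation the abstract criticizes; the paper's boosting framework replaces it with a more effective cross-tree budget allocation.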

Published

2020-04-03

How to Cite

Li, Q., Wu, Z., Wen, Z., & He, B. (2020). Privacy-Preserving Gradient Boosting Decision Trees. Proceedings of the AAAI Conference on Artificial Intelligence, 34(01), 784-791. https://doi.org/10.1609/aaai.v34i01.5422

Section

AAAI Technical Track: Applications