Securing Secure Aggregation: Mitigating Multi-Round Privacy Leakage in Federated Learning
DOI:
https://doi.org/10.1609/aaai.v37i8.26177
Keywords:
ML: Privacy-Aware ML, PEAI: Privacy and Security
Abstract
Secure aggregation is a critical component in federated learning (FL), which enables the server to learn the aggregate model of the users without observing their local models. Conventionally, secure aggregation algorithms focus only on ensuring the privacy of individual users in a single training round. We contend that such designs can lead to significant privacy leakage over multiple training rounds, due to partial user selection/participation at each round of FL. In fact, we show that conventional random user selection strategies in FL lead to leaking users' individual models within a number of rounds that is linear in the number of users. To address this challenge, we introduce a secure aggregation framework, Multi-RoundSecAgg, with multi-round privacy guarantees. In particular, we introduce a new metric to quantify the privacy guarantees of FL over multiple training rounds, and develop a structured user selection strategy that guarantees the long-term privacy of each user (over any number of training rounds). Our framework also carefully accounts for fairness and the average number of participating users at each round. Our experiments on the MNIST, CIFAR-10, and CIFAR-100 datasets in the IID and non-IID settings demonstrate performance improvements over the baselines, in terms of both privacy protection and test accuracy.
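The linear-round leakage described in the abstract can be made concrete with a small simulation. The sketch below is illustrative only (not the authors' released code; user counts, batch sizes, and function names are hypothetical choices): it models each round's observation as the sum of the participating users' models, so an individual model is exposed once some linear combination of the per-round participation vectors equals a standard basis vector. Random K-out-of-N selection reaches that state quickly, while a batch-partitioned selection in the spirit of Multi-RoundSecAgg, where users in the same batch always participate together, never isolates a single user.

import numpy as np

rng = np.random.default_rng(0)

def leaks_individual(P):
    """True if some standard basis vector e_i lies in the row space of the
    participation matrix P, i.e., some user's individual model is a linear
    combination of the per-round aggregates observed by the server."""
    base_rank = np.linalg.matrix_rank(P)
    n_users = P.shape[1]
    for i in range(n_users):
        e = np.zeros((1, n_users))
        e[0, i] = 1.0
        # e_i is in the row space iff appending it does not raise the rank.
        if np.linalg.matrix_rank(np.vstack([P, e])) == base_rank:
            return True
    return False

def rounds_until_random_leak(n_users=12, k=6, max_rounds=200):
    """Random K-out-of-N selection: rounds until some user's model leaks."""
    rows = []
    for t in range(1, max_rounds + 1):
        v = np.zeros(n_users)
        v[rng.choice(n_users, size=k, replace=False)] = 1.0
        rows.append(v)
        if leaks_individual(np.array(rows)):
            return t
    return None

def batched_ever_leaks(n_users=12, batch_size=3, batches_per_round=2, rounds=200):
    """Batch-partitioned selection: users in a batch always join together,
    so every participation vector is constant within each batch and no e_i
    can ever enter the row space (for batch_size > 1)."""
    n_batches = n_users // batch_size
    rows = []
    for _ in range(rounds):
        v = np.zeros(n_users)
        for b in rng.choice(n_batches, size=batches_per_round, replace=False):
            v[b * batch_size:(b + 1) * batch_size] = 1.0
        rows.append(v)
    return leaks_individual(np.array(rows))

print("random selection leaks a user after round:", rounds_until_random_leak())
print("batched selection ever leaks a user:", batched_ever_leaks())

On a typical run, the random strategy exposes some user within a few dozen rounds, while the batched strategy reports no leak however many rounds are simulated, mirroring the linear-in-N leakage result and the long-term guarantee summarized in the abstract.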
Published
2023-06-26
How to Cite
So, J., Ali, R. E., Güler, B., Jiao, J., & Avestimehr, A. S. (2023). Securing Secure Aggregation: Mitigating Multi-Round Privacy Leakage in Federated Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 9864-9873. https://doi.org/10.1609/aaai.v37i8.26177
Issue
Vol. 37 No. 8 (2023)
Section
AAAI Technical Track on Machine Learning III