Reconciling Privacy and Byzantine-robustness in Federated Learning
DOI:
https://doi.org/10.1609/aaaiss.v3i1.31229
Keywords:
ML: Distributed Machine Learning & Federated Learning, Collaborative Learning, Learning On The Edge
Abstract
In this talk, we discuss how to make federated learning simultaneously secure for the server and private for the clients. Most prior efforts fall into one of two categories. At one end of the spectrum, some work uses techniques such as secure aggregation to hide individual clients' updates, revealing only the aggregated global update to a malicious server that strives to infer the clients' private data from their updates. At the other end of the spectrum, some work uses Byzantine-robust FL protocols to suppress the influence of malicious clients' updates. We present a protocol that offers a bidirectional defense, simultaneously combating a malicious centralized server and Byzantine malicious clients. Our protocol also improves the dimension dependence and achieves a near-optimal statistical rate in the strongly convex case.
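To make the Byzantine-robustness half of the abstract concrete, the sketch below shows one standard robust aggregation rule, the coordinate-wise median. This is an illustrative example of the general technique only, not the protocol presented in the talk (which additionally provides privacy against the server, e.g. via secure aggregation); the function name and the toy update values are assumptions for the example.

```python
import numpy as np

def coordinate_median_aggregate(updates):
    """Byzantine-robust aggregation via the coordinate-wise median.

    Illustrative sketch only (not the talk's protocol): stacks the
    client updates into an (n_clients, dim) matrix and takes the
    median along the client axis, so a minority of arbitrary
    (Byzantine) updates cannot pull any coordinate far away.
    """
    return np.median(np.stack(updates), axis=0)

# Toy example: 4 honest clients near the true gradient, 1 Byzantine outlier.
honest = [np.array([1.0, 2.0]) + 0.1 * i for i in range(4)]
byzantine = [np.array([100.0, -100.0])]
agg = coordinate_median_aggregate(honest + byzantine)
# The outlier is ignored: each coordinate of `agg` stays near the honest values.
```

A plain average would be dragged far off by the single outlier; the median bounds its influence per coordinate, which is the basic robustness property such protocols build on.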
Published
2024-05-20
How to Cite
Wang, L. (2024). Reconciling Privacy and Byzantine-robustness in Federated Learning. Proceedings of the AAAI Symposium Series, 3(1), 328-328. https://doi.org/10.1609/aaaiss.v3i1.31229
Issue
Section
Federated Learning on the Edge