LiD-FL: Towards List-Decodable Federated Learning
DOI: https://doi.org/10.1609/aaai.v39i18.34072
Abstract
Federated learning is often deployed in environments with many unverified participants, so federated learning under adversarial attack has received significant attention. This paper proposes an algorithmic framework for list-decodable federated learning, in which a central server maintains a list of models, at least one of which is guaranteed to perform well. The framework imposes no strict restriction on the fraction of honest clients, extending the applicability of Byzantine federated learning to scenarios in which more than half of the clients are adversarial. Assuming that the variance of the gradient noise in stochastic gradient descent is bounded, we prove a convergence theorem for our method on strongly convex and smooth losses. Experimental results on image classification tasks with both convex and non-convex losses demonstrate that the proposed algorithm withstands a malicious majority under various attacks.
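The abstract does not spell out the LiD-FL algorithm itself, but the core list-decoding idea can be illustrated with a deliberately simple toy: instead of aggregating all client updates into a single model (which a malicious majority can hijack), the server keeps a list of candidate models, and the guarantee is only that at least one entry in the list is good. The sketch below, with hypothetical helper names and a trivial one-candidate-per-client strategy that is much cruder than the paper's method, minimizes a scalar quadratic loss while two of three clients mount a sign-flip attack.

```python
def honest_grad(w):
    # True gradient of the loss f(w) = (w - 3)^2 reported by an honest client.
    return 2.0 * (w - 3.0)

def adversarial_grad(w):
    # Sign-flip attack: the adversary reports the negated gradient,
    # pushing the model away from the optimum w* = 3.
    return -2.0 * (w - 3.0)

def run_list_fl(client_grads, w0=0.0, lr=0.1, rounds=100):
    """Toy list-decodable scheme (illustrative, not the paper's algorithm):
    the server keeps one candidate model per client, and candidate i is
    trained only on client i's gradients. If any client is honest, at
    least one candidate converges, even under a malicious majority."""
    models = [w0] * len(client_grads)
    for _ in range(rounds):
        models = [w - lr * g(w) for w, g in zip(models, client_grads)]
    return models

# Malicious majority: 1 honest client vs. 2 adversaries.
clients = [honest_grad, adversarial_grad, adversarial_grad]
models = run_list_fl(clients)
# At least one model in the list sits near the optimum w* = 3.
best = min(models, key=lambda w: (w - 3.0) ** 2)
```

Single-model aggregation (e.g. averaging these three gradients) would move the model away from the optimum every round; the list sidesteps the impossibility by relaxing the output to a small set of candidates, which is the scenario the convergence theorem addresses.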
Published
2025-04-11
How to Cite
Liu, H., Shan, L., Bao, H., You, R., Yi, Y., & Lv, J. (2025). LiD-FL: Towards List-Decodable Federated Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 39(18), 18825–18833. https://doi.org/10.1609/aaai.v39i18.34072
Section
AAAI Technical Track on Machine Learning IV