LSHFed: Robust and Communication-Efficient Federated Learning with Locally-Sensitive Hashing Gradient Mapping
DOI:
https://doi.org/10.1609/aaai.v40i25.39184
Abstract
Federated learning (FL) enables collaborative model training across distributed nodes without exposing raw data, but its decentralized nature makes it vulnerable in trust-deficient environments. Inference attacks may recover sensitive information from gradient updates, while poisoning attacks can degrade model performance or induce malicious behaviors. Existing defenses often suffer from high communication and computation costs, or limited detection precision. To address these issues, we propose LSHFed, a robust and communication-efficient FL framework that simultaneously enhances aggregation robustness and privacy preservation. At its core, LSHFed incorporates LSHGM, a novel gradient verification mechanism that projects high-dimensional gradients into compact binary representations via multi-hyperplane locality-sensitive hashing. This enables accurate detection and filtering of malicious gradients using only their irreversible hash forms, thus mitigating privacy leakage risks and substantially reducing transmission overhead. Extensive experiments demonstrate that LSHFed maintains high model performance even when up to 50% of participants are collusive adversaries, while achieving up to a 1000× reduction in gradient verification communication compared to full-gradient methods.
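The core projection step described above can be illustrated with a minimal random-hyperplane (SimHash-style) sketch. This is an assumption-laden toy, not the paper's LSHGM implementation: the function names, the 64-bit code length, and the Hamming-similarity filter threshold are all illustrative choices, but it shows how signing a gradient against random hyperplanes yields a compact, hard-to-invert binary code in which directionally similar gradients collide and sign-flipped (poisoned) gradients stand out.

```python
import numpy as np

def lsh_sign_hash(gradient, hyperplanes):
    """Hash a flat gradient to one bit per hyperplane (1 if on the
    positive side). Illustrative SimHash-style construction; the
    paper's exact multi-hyperplane scheme may differ."""
    return (hyperplanes @ gradient >= 0).astype(np.uint8)

def hamming_similarity(code_a, code_b):
    """Fraction of matching bits; under random-hyperplane hashing this
    approximates 1 - angle(a, b) / pi, a proxy for cosine similarity."""
    return float(np.mean(code_a == code_b))

rng = np.random.default_rng(0)
dim, n_bits = 10_000, 64                      # 10k-dim gradient -> 64-bit code
planes = rng.standard_normal((n_bits, dim))   # shared random hyperplanes

honest = rng.standard_normal(dim)
similar = honest + 0.1 * rng.standard_normal(dim)  # benign update, close direction
malicious = -honest                                # sign-flipped poisoned update

h_code = lsh_sign_hash(honest, planes)
s_code = lsh_sign_hash(similar, planes)
m_code = lsh_sign_hash(malicious, planes)

print(hamming_similarity(h_code, s_code))  # close to 1.0: benign update collides
print(hamming_similarity(h_code, m_code))  # 0.0: negating the gradient flips every bit
```

Transmitting the 64-bit code instead of 10,000 32-bit floats for verification is what drives the communication savings the abstract reports; only the irreversible hash, not the raw gradient, leaves the client during filtering.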
Published
2026-03-14
How to Cite
Cheng, G., Yang, M., Zhao, X., Yu, S., Du, T., Wu, Y., … Deng, S. (2026). LSHFed: Robust and Communication-Efficient Federated Learning with Locally-Sensitive Hashing Gradient Mapping. Proceedings of the AAAI Conference on Artificial Intelligence, 40(25), 20490–20498. https://doi.org/10.1609/aaai.v40i25.39184
Section
AAAI Technical Track on Machine Learning II