Federated Unlearning with Gradient Descent and Conflict Mitigation

Authors

  • Zibin Pan The Chinese University of Hong Kong, Shenzhen
  • Zhichao Wang The Chinese University of Hong Kong, Shenzhen
  • Chi Li The Chinese University of Hong Kong, Shenzhen; Shenzhen Research Institute of Big Data
  • Kaiyan Zheng University of Michigan - Ann Arbor
  • Boqi Wang The Chinese University of Hong Kong, Shenzhen
  • Xiaoying Tang The Chinese University of Hong Kong, Shenzhen; The Shenzhen Institute of Artificial Intelligence and Robotics for Society; The Guangdong Provincial Key Laboratory of Future Networks of Intelligence
  • Junhua Zhao The Chinese University of Hong Kong, Shenzhen; The Shenzhen Institute of Artificial Intelligence and Robotics for Society

DOI:

https://doi.org/10.1609/aaai.v39i19.34181

Abstract

Federated Learning (FL) has received much attention in recent years. However, although clients are not required to share their data in FL, the global model itself can implicitly memorize clients' local data. It is therefore necessary to effectively remove the target client's data from the FL global model to mitigate the risk of privacy leakage and implement "the right to be forgotten". Federated Unlearning (FU) has been considered a promising solution to remove data without full retraining. However, model utility often degrades significantly during unlearning due to gradient conflicts. Furthermore, during post-training to recover model utility, the model tends to move back and revert what has already been unlearned. To address these issues, we propose Federated Unlearning with Orthogonal Steepest Descent (FedOSD). We first design an unlearning cross-entropy loss to overcome the convergence issue of gradient ascent. A steepest descent direction for unlearning is then calculated under the constraints of being non-conflicting with other clients' gradients and closest to the target client's gradient. This enables efficient unlearning while mitigating the reduction in model utility. After unlearning, we recover the model utility while preserving what has been unlearned. Finally, extensive experiments in several FL scenarios verify that FedOSD outperforms state-of-the-art FU algorithms in terms of both unlearning effectiveness and model utility.
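The core geometric idea in the abstract, finding a direction close to the target client's (unlearning) gradient that does not conflict with the remaining clients' gradients, can be sketched as follows. This is a minimal illustrative stand-in using an iterative projection (in the style of gradient surgery), not the exact steepest-descent computation from the paper; the function name and parameters are assumptions for illustration only.

```python
import numpy as np

def conflict_free_direction(g_target, other_grads, iters=3):
    """Illustrative sketch (not the paper's exact method): project the
    target client's gradient so the result has a non-negative inner
    product with every other client's gradient, i.e. following it does
    not increase the other clients' losses to first order.
    """
    d = g_target.astype(float).copy()
    for _ in range(iters):  # repeat, since one projection can reintroduce a conflict
        for g in other_grads:
            dot = d @ g
            if dot < 0:  # conflict detected: remove the conflicting component
                d = d - (dot / (g @ g)) * g
    return d
```

For example, with a target gradient `[1, -2]` and a single other-client gradient `[0, 1]`, the conflicting second component is projected away, leaving `[1, 0]`, which is as close as possible to the target gradient while having a non-negative inner product with the other gradient.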

Published

2025-04-11

How to Cite

Pan, Z., Wang, Z., Li, C., Zheng, K., Wang, B., Tang, X., & Zhao, J. (2025). Federated Unlearning with Gradient Descent and Conflict Mitigation. Proceedings of the AAAI Conference on Artificial Intelligence, 39(19), 19804–19812. https://doi.org/10.1609/aaai.v39i19.34181

Section

AAAI Technical Track on Machine Learning V