How Does the Smoothness Approximation Method Facilitate Generalization for Federated Adversarial Learning?

Authors

  • Wenjun Ding School of Computer Science and Engineering, Central South University, Changsha, China Xiangjiang Laboratory, Changsha, China
  • Ying An Big Data Institute, Central South University, Changsha, China
  • Lixing Chen School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China Shanghai Key Laboratory of Integrated Administration Technologies for Information Security, Shanghai, China
  • Shichao Kan School of Computer Science and Engineering, Central South University, Changsha, China
  • Fan Wu School of Computer Science and Engineering, Central South University, Changsha, China
  • Zhe Qu School of Computer Science and Engineering, Central South University, Changsha, China Xiangjiang Laboratory, Changsha, China

DOI:

https://doi.org/10.1609/aaai.v39i15.33788

Abstract

Federated Adversarial Learning (FAL) is a robust framework for resisting adversarial attacks in federated learning. Although several FAL studies have developed efficient algorithms, they focus primarily on convergence and overlook generalization, which is crucial for evaluating how an algorithm performs on unseen data. Generalization analysis is challenging, however, because adversarial loss functions are non-smooth; a common remedy is smoothness approximation. In this paper, we develop algorithmic stability measures to evaluate the generalization performance of two popular FAL algorithms, Vanilla FAL (VFAL) and Slack FAL (SFAL), under three smoothness approximation methods: (1) Surrogate Smoothness Approximation (SSA), (2) Randomized Smoothness Approximation (RSA), and (3) Over-Parameterized Smoothness Approximation (OPSA). Based on our in-depth analysis, we explain how to choose the smoothness approximation method so as to mitigate generalization error in FAL, and we identify RSA as the most effective method for reducing generalization error. In highly data-heterogeneous scenarios, we further recommend SFAL to mitigate the degradation in generalization performance caused by heterogeneity. Building on these theoretical results, we offer insights for developing more efficient FAL algorithms, such as designing new metrics and dynamic aggregation rules to mitigate heterogeneity.
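To illustrate the idea behind randomized smoothness approximation in general (not the paper's specific RSA construction), the sketch below smooths a non-smooth loss by averaging it over Gaussian perturbations of the parameters, i.e. it Monte Carlo-estimates f_sigma(w) = E_{xi ~ N(0, sigma^2 I)}[f(w + xi)]; the smoothed surrogate is differentiable even where f is not. The function names and the choice of absolute-value loss are illustrative assumptions, not from the paper.

```python
import numpy as np

def randomized_smoothing(loss, w, sigma=0.1, n_samples=1000, rng=None):
    """Monte Carlo estimate of the Gaussian-smoothed loss
    f_sigma(w) = E_{xi ~ N(0, sigma^2 I)}[loss(w + xi)].
    Averaging over random perturbations yields a smooth surrogate
    even when `loss` itself is non-smooth."""
    rng = np.random.default_rng(0) if rng is None else rng
    w = np.asarray(w, dtype=float)
    noise = rng.normal(scale=sigma, size=(n_samples,) + w.shape)
    return float(np.mean([loss(w + xi) for xi in noise]))

# Example: the absolute-value loss is non-smooth at 0, but its
# smoothed version is differentiable there (its value at 0 is
# E|xi| = sigma * sqrt(2/pi), approx 0.08 for sigma = 0.1).
abs_loss = lambda w: np.abs(w).sum()
smoothed_at_zero = randomized_smoothing(abs_loss, np.zeros(1), sigma=0.1)
```

In a stability analysis, such a surrogate lets standard smooth-loss arguments apply, at the cost of a bias controlled by sigma: larger sigma gives a smoother surrogate but a looser approximation of the original loss.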

Published

2025-04-11

How to Cite

Ding, W., An, Y., Chen, L., Kan, S., Wu, F., & Qu, Z. (2025). How Does the Smoothness Approximation Method Facilitate Generalization for Federated Adversarial Learning?. Proceedings of the AAAI Conference on Artificial Intelligence, 39(15), 16280–16288. https://doi.org/10.1609/aaai.v39i15.33788

Section

AAAI Technical Track on Machine Learning I