MGIA: Mutual Gradient Inversion Attack in Multi-Modal Federated Learning (Student Abstract)

Authors

  • Xuan Liu, The Hong Kong Polytechnic University
  • Siqi Cai, Wuhan University of Technology
  • Lin Li, Wuhan University of Technology
  • Rui Zhang, The Hong Kong Polytechnic University
  • Song Guo, The Hong Kong Polytechnic University

DOI:

https://doi.org/10.1609/aaai.v37i13.26995

Keywords:

Federated Learning, Gradient Inversion Attack, Multimodal Machine Learning, Knowledge Distillation, Information Security

Abstract

Recent studies have demonstrated that local training data in Federated Learning can be recovered from shared gradients; such attacks are known as gradient inversion attacks. These attacks have proven powerful on both computer vision and natural language processing tasks. Since data from different modalities are known to be correlated, we argue that combining such attacks with Multi-modal Learning may cause more severe privacy leakage: different modalities can communicate through gradients to provide richer information to the attacker, improving both the strength and the efficiency of gradient inversion attacks. In this paper, we propose the Mutual Gradient Inversion Attack (MGIA), which exploits the labels shared between the image and text modalities together with the idea of knowledge distillation. Our experimental results show that MGIA achieves the best quality of both modality-data and label recovery in comparison with other methods. Moreover, MGIA verifies that multi-modal gradient inversion attacks are more likely to disclose private information than existing single-modality attacks.
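For readers unfamiliar with the underlying mechanism, the sketch below illustrates the general DLG-style gradient-matching idea the abstract builds on, extended with a toy cross-modal consistency term and a soft label shared between an image branch and a text branch. This is a minimal illustration only, not the authors' MGIA implementation; the two-branch model, the shared soft label, the consistency term, and the weight `alpha` are all illustrative assumptions.

```python
# Minimal sketch of DLG-style gradient inversion with a shared label across
# two modalities. NOT the authors' MGIA code; model and loss terms are toys.
import torch
import torch.nn as nn

torch.manual_seed(0)

class TwoBranch(nn.Module):
    """Toy model: an image branch and a text branch over a shared label space."""
    def __init__(self, img_dim=64, txt_dim=32, n_classes=10):
        super().__init__()
        self.img_head = nn.Linear(img_dim, n_classes)
        self.txt_head = nn.Linear(txt_dim, n_classes)

    def forward(self, img, txt):
        return self.img_head(img), self.txt_head(txt)

model = TwoBranch()
criterion = nn.CrossEntropyLoss()

# Victim's private batch (unknown to the attacker) and the gradients it yields.
true_img = torch.randn(1, 64)
true_txt = torch.randn(1, 32)
true_lbl = torch.tensor([3])  # one label shared by both modalities
img_logits, txt_logits = model(true_img, true_txt)
loss = criterion(img_logits, true_lbl) + criterion(txt_logits, true_lbl)
true_grads = torch.autograd.grad(loss, model.parameters())

# Attacker: optimize dummy inputs and one shared soft label so that the
# gradients they produce match the observed ones.
dummy_img = torch.randn(1, 64, requires_grad=True)
dummy_txt = torch.randn(1, 32, requires_grad=True)
dummy_lbl = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.LBFGS([dummy_img, dummy_txt, dummy_lbl])

alpha = 0.1  # assumed weight of the cross-modal (distillation-like) term

def closure():
    opt.zero_grad()
    img_out, txt_out = model(dummy_img, dummy_txt)
    soft = dummy_lbl.softmax(dim=-1)
    # Cross-entropy of both branches against the single shared soft label.
    d_loss = (-(soft * img_out.log_softmax(-1)).sum()
              - (soft * txt_out.log_softmax(-1)).sum())
    grads = torch.autograd.grad(d_loss, model.parameters(), create_graph=True)
    match = sum(((g - t) ** 2).sum() for g, t in zip(grads, true_grads))
    # Encourage the two branches to agree on the label (KD-style hint).
    consist = ((img_out.log_softmax(-1) - txt_out.log_softmax(-1)) ** 2).mean()
    total = match + alpha * consist
    total.backward()
    return total

for _ in range(50):
    opt.step(closure)
```

In this toy setting, the shared soft label is what lets the two branches inform each other during recovery; MGIA's actual use of shared labels and knowledge distillation is described in the paper itself.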

Published

2024-07-15

How to Cite

Liu, X., Cai, S., Li, L., Zhang, R., & Guo, S. (2024). MGIA: Mutual Gradient Inversion Attack in Multi-Modal Federated Learning (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 16270-16271. https://doi.org/10.1609/aaai.v37i13.26995