Federated Learning for Face Recognition with Gradient Correction

Authors

  • Yifan Niu, Beijing University of Posts and Telecommunications
  • Weihong Deng, Beijing University of Posts and Telecommunications

DOI:

https://doi.org/10.1609/aaai.v36i2.20095

Keywords:

Computer Vision (CV)

Abstract

With increasing attention to privacy issues in face recognition, federated learning has emerged as one of the most prevalent approaches to studying the unconstrained face recognition problem with private, decentralized data. However, conventional decentralized federated algorithms, which share the whole network parameters among clients, suffer from privacy leakage in the face recognition setting. In this work, we introduce a framework, FedGC, that tackles federated learning for face recognition while guaranteeing higher privacy. We explore a novel idea of correcting gradients from the perspective of backward propagation and propose a softmax-based regularizer that corrects the gradients of class embeddings by precisely injecting a cross-client gradient term. Theoretically, we show that FedGC constitutes a valid loss function similar to the standard softmax. Extensive experiments validate the superiority of FedGC, which matches the performance of conventional centralized methods that utilize the full training dataset, on several popular benchmark datasets.
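The core idea can be illustrated with a minimal NumPy sketch. This is a hypothetical toy, not the paper's implementation: `W_local` and `W_other` stand for this client's class embeddings and the class embeddings contributed by other clients, and the "correction" is shown as the difference between a purely local softmax gradient and the gradient of a softmax computed over all clients' classes.

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def local_softmax_grad(x, W_local, y):
    """Gradient of softmax cross-entropy w.r.t. the local class
    embeddings, using only this client's classes (no correction)."""
    p = softmax(W_local @ x)
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    # dL/dW_local is the outer product of (p - onehot) with the feature x
    return np.outer(p - onehot, x)

def corrected_grad(x, W_local, W_other, y):
    """Hypothetical cross-client correction: recompute the softmax over
    local AND other clients' class embeddings, so the gradient on the
    local embeddings matches what full-softmax training would produce."""
    W_full = np.vstack([W_local, W_other])
    p = softmax(W_full @ x)
    onehot = np.zeros_like(p)
    onehot[y] = 1.0  # the ground-truth class lives in the local block
    grad_full = np.outer(p - onehot, x)
    return grad_full[: len(W_local)]  # keep only the local-embedding rows

rng = np.random.default_rng(0)
x = rng.normal(size=8)                   # feature from the backbone
W_local = 0.1 * rng.normal(size=(4, 8))  # this client's class embeddings
W_other = 0.1 * rng.normal(size=(6, 8))  # other clients' class embeddings
g_local = local_softmax_grad(x, W_local, y=2)
g_corr = corrected_grad(x, W_local, W_other, y=2)
print(g_local.shape, g_corr.shape)  # both (4, 8); the values differ
```

The gap between `g_local` and `g_corr` is the cross-client term that, per the abstract, FedGC injects via a softmax-based regularizer rather than by exchanging raw class embeddings directly.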

Published

2022-06-28

How to Cite

Niu, Y., & Deng, W. (2022). Federated Learning for Face Recognition with Gradient Correction. Proceedings of the AAAI Conference on Artificial Intelligence, 36(2), 1999-2007. https://doi.org/10.1609/aaai.v36i2.20095

Section

AAAI Technical Track on Computer Vision II