Federated Variational Inference: Towards Improved Personalization and Generalization

Authors

  • Elahe Vedadi, Google
  • Joshua V. Dillon, Google
  • Philip Andrew Mansfield, Google
  • Karan Singhal, Google
  • Arash Afkanpour, Vector Institute
  • Warren Richard Morningstar, Google

DOI:

https://doi.org/10.1609/aaaiss.v3i1.31228

Keywords:

Federated Learning, Personalization, Distributed Machine Learning, Algorithms

Abstract

Conventional federated learning algorithms train a single global model by leveraging all participating clients' data. However, due to heterogeneity in client generative distributions and predictive models, these approaches may not appropriately approximate the predictive process, converge to an optimal state, or generalize to new clients. We study personalization and generalization in stateless cross-device federated learning setups, assuming heterogeneity in client data distributions and predictive models. We first propose a hierarchical generative model and formalize it using Bayesian inference. We then approximate this process using variational inference to train our model efficiently. We call this algorithm Federated Variational Inference (FedVI). We use PAC-Bayes analysis to provide generalization bounds for FedVI. Evaluations on FEMNIST and CIFAR-100 image classification show that FedVI outperforms the state of the art on both tasks.
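To make the general pattern behind the abstract concrete, below is a minimal toy sketch of federated variational inference with a conjugate Gaussian model. It is not the paper's FedVI algorithm: the hierarchical Gaussian model, the closed-form client posterior update, and the naive server averaging are all illustrative assumptions chosen so the example runs in a few lines.

    # Toy federated variational inference sketch (illustrative only, not FedVI).
    # Each client holds Gaussian data, computes a local posterior over its own
    # mean under a shared global prior, and the server averages client
    # posterior means to update the shared parameter.
    import numpy as np

    rng = np.random.default_rng(0)

    def client_update(data, global_mu, prior_var=1.0, noise_var=1.0):
        # Variational posterior q(phi) = N(post_mu, post_var) for a Gaussian
        # likelihood N(x | phi, noise_var) under the prior
        # N(phi | global_mu, prior_var); for this conjugate pair the
        # ELBO-optimal q coincides with the exact posterior.
        n = len(data)
        post_var = 1.0 / (1.0 / prior_var + n / noise_var)
        post_mu = post_var * (global_mu / prior_var + data.sum() / noise_var)
        return post_mu, post_var

    # Heterogeneous clients: each client's true mean is drawn around a shared
    # global mean, mimicking a hierarchical generative process.
    true_global = 2.0
    client_means = true_global + rng.normal(0.0, 1.0, size=10)
    client_data = [m + rng.normal(0.0, 1.0, size=20) for m in client_means]

    global_mu = 0.0  # server's running estimate of the shared parameter
    for rnd in range(5):
        posteriors = [client_update(x, global_mu) for x in client_data]
        global_mu = float(np.mean([mu for mu, _ in posteriors]))  # naive aggregation
        print(f"round {rnd}: global_mu = {global_mu:.3f}")

In this sketch the per-client posteriors play the role of personalized models while the aggregated global parameter captures what is shared across clients; the paper's actual model, objective, and aggregation scheme are given in the full text at the DOI above.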

Published

2024-05-20

How to Cite

Vedadi, E., Dillon, J. V., Mansfield, P. A., Singhal, K., Afkanpour, A., & Morningstar, W. R. (2024). Federated Variational Inference: Towards Improved Personalization and Generalization. Proceedings of the AAAI Symposium Series, 3(1), 323-327. https://doi.org/10.1609/aaaiss.v3i1.31228