A Multi-Agent Reinforcement Learning Approach for Efficient Client Selection in Federated Learning

Authors

  • Sai Qian Zhang, Harvard University
  • Jieyu Lin, University of Toronto
  • Qi Zhang, Microsoft

DOI:

https://doi.org/10.1609/aaai.v36i8.20894

Keywords:

Machine Learning (ML), Domain(s) Of Application (APP), Multiagent Systems (MAS)

Abstract

Federated learning (FL) is a training technique that enables client devices to jointly learn a shared model by aggregating locally computed models without exposing their raw data. While most existing work focuses on improving FL model accuracy, in this paper we focus on improving training efficiency, which is often a hurdle to adopting FL in real-world applications. Specifically, we design an efficient FL framework that jointly optimizes model accuracy, processing latency, and communication efficiency, all of which are primary design considerations for practical deployments of FL. Inspired by the recent success of Multi-Agent Reinforcement Learning (MARL) in solving complex control problems, we present FedMarl, a federated learning framework that relies on trained MARL agents to perform efficient run-time client selection. Experiments show that FedMarl can significantly improve model accuracy with much lower processing latency and communication cost.
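To make the client-selection idea concrete, the following is a minimal Python sketch rather than the authors' implementation: it assumes one agent per client observing illustrative state features (local training loss, processing latency, communication cost) and substitutes a toy linear scoring rule for the trained MARL policy described in the paper.

```python
# Illustrative sketch of MARL-style client selection in one FL round.
# The class names, state features, and linear scoring policy are assumptions
# for illustration only; they do not reproduce FedMarl's actual architecture.
import random
from dataclasses import dataclass
from typing import List


@dataclass
class ClientObservation:
    """Per-client state an agent might observe each round (illustrative)."""
    train_loss: float      # local training loss from a probe step
    latency_ms: float      # estimated processing latency
    comm_cost_mb: float    # estimated model-upload communication cost


class ClientAgent:
    """One agent per client; a toy linear policy stands in for a trained MARL policy."""

    def __init__(self, weights=(1.0, -0.005, -0.05)):
        self.w_loss, self.w_lat, self.w_comm = weights

    def decide(self, obs: ClientObservation) -> bool:
        # Trade off expected accuracy gain (higher local loss suggests a more
        # informative update) against latency and communication overhead.
        score = (self.w_loss * obs.train_loss
                 + self.w_lat * obs.latency_ms
                 + self.w_comm * obs.comm_cost_mb)
        return score > 0.0


def select_clients(observations: List[ClientObservation],
                   agents: List[ClientAgent]) -> List[int]:
    """Return indices of clients chosen to participate in this round."""
    return [i for i, (agent, obs) in enumerate(zip(agents, observations))
            if agent.decide(obs)]


if __name__ == "__main__":
    random.seed(0)
    obs = [ClientObservation(train_loss=random.uniform(0.5, 3.0),
                             latency_ms=random.uniform(50, 500),
                             comm_cost_mb=random.uniform(1, 20))
           for _ in range(10)]
    agents = [ClientAgent() for _ in obs]
    print("Selected clients:", select_clients(obs, agents))
```

In the paper's setting, the decision policy would be learned so that the resulting client subsets jointly improve accuracy while reducing latency and communication cost, rather than hand-weighted as in this sketch.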

Published

2022-06-28

How to Cite

Zhang, S. Q., Lin, J., & Zhang, Q. (2022). A Multi-Agent Reinforcement Learning Approach for Efficient Client Selection in Federated Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8), 9091-9099. https://doi.org/10.1609/aaai.v36i8.20894

Section

AAAI Technical Track on Machine Learning III