You Can Have Your Cake and Eat It Too: Ensuring Practical Robustness and Privacy in Federated Learning

Authors

  • Nojan Sheybani University of California, San Diego
  • Farinaz Koushanfar University of California, San Diego

DOI:

https://doi.org/10.1609/aaaiss.v3i1.31225

Keywords:

ML: Distributed Machine Learning & Federated Learning, Learning On The Edge, Secure And Private Federated Learning

Abstract

Federated learning (FL) robustness is inherently challenging to guarantee, especially when privacy must be maintained at the same time. Compared to standard ML settings, FL's open training process makes it easy for malicious clients to go under the radar. Moreover, malicious clients can collude to attack the training process continuously and without detection. FL models also remain susceptible to attacks on standard ML training procedures. This massive attack surface makes balancing the tradeoff between utility, practicality, robustness, and privacy extremely challenging. While defenses built on popular privacy-preserving primitives, such as fully homomorphic encryption, have been proposed, they often struggle with the all-important question faced by every privacy-preserving system: how much utility and practicality am I willing to give up to ensure privacy and robustness? In this work, we discuss a practical approach towards secure and robust FL and the challenges facing this field of emerging research.
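
To make the robustness side of this tradeoff concrete, the minimal sketch below (our illustration, not the paper's method) contrasts plain federated averaging with a coordinate-wise-median robust aggregator. The comparison/sorting operations that give the median its robustness are precisely the kind of computation that becomes expensive once client updates are encrypted, e.g., under fully homomorphic encryption. The function names and toy data are hypothetical, for illustration only.

    # Illustrative sketch (assumed setup, not the authors' system): plain
    # FedAvg vs. a robust aggregation rule on poisoned client updates.
    import numpy as np

    def fedavg(updates: np.ndarray) -> np.ndarray:
        """Plain FedAvg: the mean of client updates. Cheap and FHE-friendly
        (only additions and one scalar division), but a single malicious
        client can shift the result arbitrarily."""
        return updates.mean(axis=0)

    def coordinate_median(updates: np.ndarray) -> np.ndarray:
        """Coordinate-wise median: tolerates a minority of colluding
        malicious clients, but relies on comparisons/sorting, which are
        costly to evaluate on homomorphically encrypted data."""
        return np.median(updates, axis=0)

    # Toy round: 9 honest clients near the true update (~1.0),
    # 3 colluding clients submitting large poisoned updates.
    rng = np.random.default_rng(0)
    honest = rng.normal(loc=1.0, scale=0.1, size=(9, 4))
    poisoned = np.full((3, 4), 50.0)
    updates = np.vstack([honest, poisoned])

    print("FedAvg:", fedavg(updates))            # dragged toward 50
    print("Median:", coordinate_median(updates)) # stays near 1.0

In this toy round the mean is pulled far from the honest updates while the median is barely affected; the practical question the abstract raises is how to obtain that robustness when the server may only see encrypted updates.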

Published

2024-05-20