Adversarial Attacks on Federated-Learned Adaptive Bitrate Algorithms
DOI:
https://doi.org/10.1609/aaai.v38i1.27796
Keywords:
APP: Web, ML: Distributed Machine Learning & Federated Learning
Abstract
Learning-based adaptive bitrate (ABR) algorithms have revolutionized video streaming. With the growing demand for data privacy and the rapid development of mobile devices, federated learning (FL) has emerged as a popular training method for neural ABR algorithms in both academia and industry. However, we have discovered that FL-based ABR models are vulnerable to model-poisoning attacks, since local updates remain unseen during global aggregation. In response, we propose MAFL (Malicious ABR model based on Federated Learning) to demonstrate that backdooring a learning-based ABR model via FL is practical. Instead of attacking the global policy, MAFL targets only a single "target client". Moreover, the unique difficulties introduced by deep reinforcement learning (DRL) make the attack even harder to mount. To address these difficulties, MAFL is designed with a two-stage attacking mechanism. Using two representative attack cases with real-world traces, we show that MAFL significantly degrades model performance on the target client (increasing the rebuffering penalty by 2x and 5x, respectively) with minimal negative impact on benign clients.
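The vulnerability the abstract describes stems from the server averaging client updates it cannot inspect. A minimal toy sketch of that aggregation-level weakness (a generic "model replacement" poisoning pattern, not the paper's MAFL mechanism; all names and the single-float "model" are hypothetical):

```python
# Toy illustration: FedAvg blindly averages client updates, so one
# attacker who scales its update can steer the global model.
# The model here is a single float; fedavg, benign/attacker targets,
# and the scaling trick are illustrative, not MAFL's actual design.

def fedavg(updates):
    """Server-side aggregation: averages updates without inspecting them."""
    return sum(updates) / len(updates)

n_clients = 10
attacker_target = -5.0  # value the attacker wants the global model to take

# Nine benign clients each submit an honest update of 1.0.
benign_updates = [1.0] * (n_clients - 1)

# Model replacement: the attacker solves for the update that makes the
# average land exactly on its target, exploiting blind aggregation.
malicious_update = n_clients * attacker_target - sum(benign_updates)

new_global = fedavg(benign_updates + [malicious_update])
print(new_global)  # -5.0, i.e. the attacker's target
```

The same arithmetic scales to real weight vectors: because aggregation is a plain average, a sufficiently scaled malicious update dominates it unless the server applies clipping or anomaly detection.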
Published
2024-03-25
How to Cite
Zhang, R.-X., & Huang, T. (2024). Adversarial Attacks on Federated-Learned Adaptive Bitrate Algorithms. Proceedings of the AAAI Conference on Artificial Intelligence, 38(1), 419-427. https://doi.org/10.1609/aaai.v38i1.27796
Section
AAAI Technical Track on Application Domains