Integer Is Enough: When Vertical Federated Learning Meets Rounding

Authors

  • Pengyu Qiu, Zhejiang University; Ant Group
  • Yuwen Pu, Zhejiang University
  • Yongchao Liu, Ant Group
  • Wenyan Liu, Ant Group; Zhejiang University
  • Yun Yue, Ant Group
  • Xiaowei Zhu, Ant Group
  • Lichun Li, Ant Group
  • Jinbao Li, Qilu University of Technology
  • Shouling Ji, Zhejiang University

DOI

https://doi.org/10.1609/aaai.v38i13.29388

Keywords

ML: Distributed Machine Learning & Federated Learning

Abstract

Vertical Federated Learning (VFL) is a solution increasingly adopted by companies that share a user group but hold different features, enabling them to collaboratively train a machine learning model. In VFL, clients exchange only the intermediate results extracted by their local models, without sharing raw data. In practice, however, VFL faces several challenges, such as computational and communication overhead, privacy leakage risk, and adversarial attacks. Our study reveals that the use of floating-point (FP) numbers is a common factor behind these issues, as FP representations can be redundant and carry more information than necessary. To address this, we propose a new architecture, the rounding layer, which converts intermediate results to integers. Our theoretical analysis and empirical results demonstrate the benefits of the rounding layer in reducing computation and memory overhead, providing privacy protection, preserving model performance, and mitigating adversarial attacks. We hope this paper inspires further research into novel architectures that address practical issues in VFL.
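
For intuition, the following is a minimal sketch, in PyTorch, of what a rounding layer as described in the abstract could look like. It is not the authors' implementation: the scale parameter and the straight-through gradient estimator are assumptions added here so the example trains end to end (torch.round has zero gradient almost everywhere).

    # Minimal sketch of a rounding layer; `scale` and the
    # straight-through estimator are assumptions, not details
    # confirmed by the abstract.
    import torch
    import torch.nn as nn

    class RoundingLayer(nn.Module):
        """Convert floating-point intermediate results to integer-valued
        tensors before they leave the client."""

        def __init__(self, scale: float = 1.0):
            super().__init__()
            self.scale = scale  # hypothetical knob: larger keeps more precision

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            rounded = torch.round(x * self.scale) / self.scale
            # Straight-through estimator: the forward pass emits the
            # rounded values, while the backward pass treats the layer
            # as the identity so gradients can still flow.
            return x + (rounded - x).detach()

    # A client-side bottom model whose outputs are integer-valued (scale=1.0),
    # mimicking the intermediate results a VFL client would exchange.
    bottom = nn.Sequential(nn.Linear(16, 8), RoundingLayer(scale=1.0))
    out = bottom(torch.randn(4, 16))
    out.sum().backward()  # gradients still reach the Linear weights

Under this sketch's assumptions, transmitting the rounded outputs as integer tensors rather than 32-bit floats is what would reduce communication and limit the information each message carries; the exact exchange protocol is beyond the scope of this example.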

Published

2024-03-24

How to Cite

Qiu, P., Pu, Y., Liu, Y., Liu, W., Yue, Y., Zhu, X., Li, L., Li, J., & Ji, S. (2024). Integer Is Enough: When Vertical Federated Learning Meets Rounding. Proceedings of the AAAI Conference on Artificial Intelligence, 38(13), 14704-14712. https://doi.org/10.1609/aaai.v38i13.29388

Issue

Vol. 38 No. 13 (2024)

Section

AAAI Technical Track on Machine Learning IV