InfoDecom: Decomposing Information for Defending Against Privacy Leakage in Split Inference

Authors

  • Ruijun Deng, College of Computer Science and Artificial Intelligence, Fudan University
  • Zhihui Lu, College of Computer Science and Artificial Intelligence, Fudan University
  • Qiang Duan, Pennsylvania State University

DOI:

https://doi.org/10.1609/aaai.v40i25.39212

Abstract

Split inference (SI) enables users to access deep learning (DL) services without directly transmitting raw data. However, recent studies reveal that data reconstruction attacks (DRAs) can recover the original inputs from the smashed data sent from the client to the server, leading to significant privacy leakage. While various defenses have been proposed, they often result in substantial utility degradation, particularly when the client-side model is shallow. We identify a key cause of this trade-off: existing defenses apply excessive perturbation to redundant information in the smashed data. To address this issue in computer vision tasks, we propose InfoDecom, a defense framework that first decomposes and removes redundant information and then injects noise calibrated to provide theoretically guaranteed privacy. Experiments demonstrate that InfoDecom achieves a superior utility-privacy trade-off compared to existing baselines.
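The abstract describes a two-step defense: decompose the smashed data to remove redundant information, then inject noise calibrated for a privacy guarantee. The sketch below illustrates that pipeline only in spirit; the SVD-based decomposition, the `keep_ratio` and `noise_std` parameters, and the Gaussian noise model are illustrative stand-ins, not the paper's actual InfoDecom method or its calibration.

```python
import numpy as np

def infodecom_style_defense(smashed, keep_ratio=0.5, noise_std=0.05, seed=None):
    """Illustrative two-step defense on a 2-D smashed-data matrix.

    Step 1: decompose (plain SVD here) and zero out low-energy
            components, treated as 'redundant' information.
    Step 2: inject Gaussian noise into the retained signal
            (the paper instead calibrates noise for a formal
            privacy guarantee).
    """
    rng = np.random.default_rng(seed)
    # Step 1: decompose and keep only the top-k singular components.
    u, s, vt = np.linalg.svd(smashed, full_matrices=False)
    k = max(1, int(keep_ratio * len(s)))
    s[k:] = 0.0
    reduced = (u * s) @ vt
    # Step 2: add noise before sending the result to the server.
    return reduced + rng.normal(0.0, noise_std, size=reduced.shape)

# Toy example: an 8x16 activation map standing in for smashed data.
x = np.random.default_rng(0).normal(size=(8, 16))
protected = infodecom_style_defense(x, keep_ratio=0.5, noise_std=0.05, seed=1)
```

The client would transmit `protected` in place of the raw smashed data, trading some downstream utility for resistance to reconstruction attacks.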

Published

2026-03-14

How to Cite

Deng, R., Lu, Z., & Duan, Q. (2026). InfoDecom: Decomposing Information for Defending Against Privacy Leakage in Split Inference. Proceedings of the AAAI Conference on Artificial Intelligence, 40(25), 20737–20745. https://doi.org/10.1609/aaai.v40i25.39212

Section

AAAI Technical Track on Machine Learning II