Disentangled Information Bottleneck

Authors

  • Ziqi Pan, Shanghai Jiao Tong University
  • Li Niu, Shanghai Jiao Tong University
  • Jianfu Zhang, RIKEN AIP; Shanghai Jiao Tong University
  • Liqing Zhang, Shanghai Jiao Tong University

DOI:

https://doi.org/10.1609/aaai.v35i10.17120

Keywords:

Representation Learning

Abstract

The information bottleneck (IB) method is a technique for extracting information relevant to predicting the target random variable from the source random variable, typically implemented by optimizing the IB Lagrangian, which balances the compression and prediction terms. However, the IB Lagrangian is hard to optimize, and multiple trials are required to tune the value of the Lagrangian multiplier. Moreover, we show that the prediction performance strictly decreases as the compression gets stronger when optimizing the IB Lagrangian. In this paper, we implement the IB method from the perspective of supervised disentangling. Specifically, we introduce the Disentangled Information Bottleneck (DisenIB), which consistently compresses the source maximally without loss of target prediction performance (maximum compression). Theoretical and experimental results demonstrate that our method consistently achieves maximum compression and performs well in terms of generalization, robustness to adversarial attack, out-of-distribution detection, and supervised disentangling.
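For background, the IB Lagrangian referred to in the abstract is conventionally written as follows (notation assumed here, following the standard IB literature rather than quoted from this paper):

% Standard IB Lagrangian over the encoder p(t|x); T is the representation
% of the source X, Y is the target, and beta >= 0 is the Lagrange multiplier.
\mathcal{L}_{\mathrm{IB}}\big[p(t \mid x)\big] = I(X;T) - \beta\, I(T;Y)

Here I(X;T) is the compression term and I(T;Y) is the prediction term; the trade-off set by \beta is what must be tuned through repeated trials, which motivates the consistency argument in the abstract.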

Published

2021-05-18

How to Cite

Pan, Z., Niu, L., Zhang, J., & Zhang, L. (2021). Disentangled Information Bottleneck. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10), 9285-9293. https://doi.org/10.1609/aaai.v35i10.17120

Section

AAAI Technical Track on Machine Learning III