Disentangled Information Bottleneck


  • Ziqi Pan Shanghai Jiao Tong University
  • Li Niu Shanghai Jiao Tong University
  • Jianfu Zhang RIKEN AIP; Shanghai Jiao Tong University
  • Liqing Zhang Shanghai Jiao Tong University


Representation Learning


The information bottleneck (IB) method is a technique for extracting information from a source random variable that is relevant for predicting a target random variable. It is typically implemented by optimizing the IB Lagrangian, which balances a compression term against a prediction term. However, the IB Lagrangian is hard to optimize, and multiple trials are required to tune the value of the Lagrange multiplier. Moreover, we show that prediction performance strictly decreases as compression becomes stronger while optimizing the IB Lagrangian. In this paper, we implement the IB method from the perspective of supervised disentangling. Specifically, we introduce the Disentangled Information Bottleneck (DisenIB), which consistently compresses the source maximally without loss of target prediction performance (maximum compression). Theoretical and experimental results demonstrate that our method consistently achieves maximum compression, and performs well in terms of generalization, robustness to adversarial attack, out-of-distribution detection, and supervised disentangling.
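The compression-prediction trade-off described above can be made concrete with a small discrete example. The sketch below (our own illustration, not the paper's code; the distributions and function names are assumptions) computes the IB Lagrangian I(Z;Y) − β·I(X;Z) for a stochastic encoder p(z|x), using the Markov chain Z–X–Y:

```python
import numpy as np

def mutual_information(p_joint):
    """I(A;B) in nats, computed from a joint distribution table p(a, b)."""
    pa = p_joint.sum(axis=1, keepdims=True)  # marginal p(a)
    pb = p_joint.sum(axis=0, keepdims=True)  # marginal p(b)
    mask = p_joint > 0
    return float(np.sum(p_joint[mask] * np.log(p_joint[mask] / (pa @ pb)[mask])))

def ib_lagrangian(p_xy, p_z_given_x, beta):
    """IB objective I(Z;Y) - beta * I(X;Z) for encoder rows p(z|x)."""
    p_x = p_xy.sum(axis=1)
    p_xz = p_z_given_x * p_x[:, None]   # joint p(x, z)
    p_zy = p_z_given_x.T @ p_xy         # p(z, y), using Z - X - Y
    return mutual_information(p_zy) - beta * mutual_information(p_xz)

# Correlated binary source and target.
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
identity = np.eye(2)                       # lossless encoder: Z = X
noisy = np.array([[0.9, 0.1],
                  [0.1, 0.9]])             # lossy (compressive) encoder
```

Comparing the two encoders shows the trade-off the abstract refers to: the noisy encoder lowers I(X;Z) (more compression) but also lowers I(Z;Y) (worse prediction), which is the behavior DisenIB is designed to avoid.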




How to Cite

Pan, Z., Niu, L., Zhang, J., & Zhang, L. (2021). Disentangled Information Bottleneck. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10), 9285-9293. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17120



AAAI Technical Track on Machine Learning III