Invariant Information Bottleneck for Domain Generalization

Authors

  • Bo Li, Microsoft Research Asia
  • Yifei Shen, Hong Kong University of Science and Technology
  • Yezhen Wang, Microsoft Research Asia
  • Wenzhen Zhu, Washington University in St. Louis
  • Colorado Reed, University of California, Berkeley
  • Dongsheng Li, Microsoft Research Asia
  • Kurt Keutzer, University of California, Berkeley
  • Han Zhao, University of Illinois at Urbana-Champaign

DOI

https://doi.org/10.1609/aaai.v36i7.20703

Keywords

Machine Learning (ML), Computer Vision (CV)

Abstract

Invariant risk minimization (IRM) has recently emerged as a promising approach to domain generalization. Nevertheless, its loss function is difficult to optimize for nonlinear classifiers, and the original optimization objective can fail in the presence of pseudo-invariant features and geometric skews. Inspired by IRM, in this paper we propose a novel formulation for domain generalization, dubbed invariant information bottleneck (IIB). IIB aims at minimizing invariant risks for nonlinear classifiers while simultaneously mitigating the impact of pseudo-invariant features and geometric skews. Specifically, we first present a novel formulation of invariant causal prediction via mutual information. We then adopt the variational formulation of mutual information to develop a tractable loss function for nonlinear classifiers. To overcome the failure modes of IRM, we propose to additionally minimize the mutual information between the inputs and the corresponding representations. IIB significantly outperforms IRM on synthetic datasets where pseudo-invariant features and geometric skews occur, demonstrating the effectiveness of the proposed formulation in overcoming the failure modes of IRM. Furthermore, experiments on DomainBed show that IIB outperforms 13 baselines by 0.9% on average across 7 real datasets.
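Since this page gives only the abstract, the following is a minimal PyTorch sketch of how such an objective could be assembled, not the authors' reference implementation: it pairs a variational information-bottleneck KL term (a standard upper bound on the mutual information I(X; Z), assuming a Gaussian encoder q(z|x) and a standard-normal prior) with an IRM-style gradient penalty as the invariance surrogate across training environments. All names (GaussianEncoder, iib_style_loss) and the coefficients beta and lam are hypothetical and chosen for illustration only.

# Illustrative sketch of an IIB-style training loss; the paper's exact
# loss may differ. Assumptions (not from this page): Gaussian q(z|x),
# standard-normal prior r(z), and an IRM-style penalty for invariance.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianEncoder(nn.Module):
    """Hypothetical encoder producing the mean and log-variance of q(z|x)."""
    def __init__(self, in_dim, z_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, z_dim)
        self.logvar = nn.Linear(64, z_dim)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

def kl_to_standard_normal(mu, logvar):
    # KL(q(z|x) || N(0, I)): a variational upper bound on I(X; Z).
    return 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=1).mean()

def irm_penalty(logits, y):
    # IRM-style surrogate: squared gradient of the per-environment risk
    # with respect to a dummy classifier scale.
    scale = torch.ones(1, requires_grad=True, device=logits.device)
    loss = F.cross_entropy(logits * scale, y)
    (grad,) = torch.autograd.grad(loss, scale, create_graph=True)
    return grad.pow(2).sum()

def iib_style_loss(encoder, classifier, env_batches, beta=1e-3, lam=1.0):
    """env_batches: list of (x, y) tensors, one pair per training environment."""
    risk = kl = inv = 0.0
    for x, y in env_batches:
        mu, logvar = encoder(x)
        # Reparameterization trick: sample z from q(z|x).
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        logits = classifier(z)
        risk += F.cross_entropy(logits, y)
        kl += kl_to_standard_normal(mu, logvar)
        inv += irm_penalty(logits, y)
    n = len(env_batches)
    return risk / n + beta * kl / n + lam * inv / n

In this sketch, the KL term plays the role of minimizing the mutual information between inputs and representations described in the abstract, while the gradient penalty stands in for the invariance constraint; how the paper actually instantiates the variational bounds is detailed in the full text at the DOI above.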

Published

2022-06-28

How to Cite

Li, B., Shen, Y., Wang, Y., Zhu, W., Reed, C., Li, D., Keutzer, K., & Zhao, H. (2022). Invariant Information Bottleneck for Domain Generalization. Proceedings of the AAAI Conference on Artificial Intelligence, 36(7), 7399-7407. https://doi.org/10.1609/aaai.v36i7.20703

Section

AAAI Technical Track on Machine Learning II