Stable Learning via Sparse Variable Independence


  • Han Yu, Tsinghua University
  • Peng Cui, Tsinghua University
  • Yue He, Tsinghua University
  • Zheyan Shen, Tsinghua University
  • Yong Lin, Hong Kong University of Science and Technology
  • Renzhe Xu, Tsinghua University
  • Xingxuan Zhang, Tsinghua University



ML: Causal Learning, PEAI: Safety, Robustness & Trustworthiness


The problem of covariate-shift generalization has attracted intensive research attention. Previous stable learning algorithms employ sample reweighting schemes to decorrelate the covariates when no explicit domain information about the training data is available. However, with finite samples, it is difficult to obtain weights that achieve perfect independence and thus eliminate the unstable variables. Moreover, decorrelating among the stable variables may inflate the variance of the learned models because the effective sample size is overly reduced; as a result, these algorithms require a tremendously large sample size to work well. In this paper, with theoretical justification, we propose SVI (Sparse Variable Independence) for the covariate-shift generalization problem. We introduce a sparsity constraint to compensate for the imperfect sample reweighting of previous methods under the finite-sample setting. Furthermore, we organically combine independence-based sample reweighting and sparsity-based variable selection in an iterative way, so that decorrelation is not enforced among the stable variables; this increases the effective sample size and alleviates variance inflation. Experiments on both synthetic and real-world datasets demonstrate the improvement in covariate-shift generalization performance brought by SVI.
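
As a rough illustration of the idea described in the abstract (not the paper's exact algorithm), the sketch below alternates between (1) learning sample weights that reduce pairwise correlations among the currently kept covariates and (2) fitting an L1-regularized weighted regression to select variables, so that decorrelation is only applied to the selected subset. The helper names, the simple gradient step, and the Lasso-based selection are hypothetical simplifications chosen for brevity.

```python
# Illustrative sketch only; all names and the optimization details are assumptions.
import numpy as np
from sklearn.linear_model import Lasso


def decorrelation_weights(X, n_iters=200, lr=0.01):
    """Learn nonnegative sample weights that shrink off-diagonal entries of the
    weighted covariance (a simple gradient stand-in for the reweighting step;
    the dependence of the weighted centering on w is ignored for simplicity)."""
    n, p = X.shape
    w = np.ones(n) / n
    for _ in range(n_iters):
        Xc = X - (w @ X)                      # weighted centering
        cov = Xc.T @ (w[:, None] * Xc)        # weighted covariance matrix
        off = cov - np.diag(np.diag(cov))     # off-diagonal terms to suppress
        grad = 2 * np.einsum('ij,jk,ik->i', Xc, off, Xc)
        w = np.clip(w - lr * grad, 1e-8, None)
        w /= w.sum()                          # keep weights on the simplex
    return w


def sparse_variable_independence(X, y, n_rounds=5, alpha=0.1):
    """Alternate sample reweighting and sparse variable selection."""
    keep = np.arange(X.shape[1])
    model = None
    for _ in range(n_rounds):
        w = decorrelation_weights(X[:, keep])
        model = Lasso(alpha=alpha).fit(X[:, keep], y, sample_weight=w * len(y))
        selected = keep[np.abs(model.coef_) > 1e-6]
        if len(selected) == 0 or len(selected) == len(keep):
            break                             # selection has stabilized
        keep = selected
    return keep, model
```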




How to Cite

Yu, H., Cui, P., He, Y., Shen, Z., Lin, Y., Xu, R., & Zhang, X. (2023). Stable Learning via Sparse Variable Independence. Proceedings of the AAAI Conference on Artificial Intelligence, 37(9), 10998-11006.



AAAI Technical Track on Machine Learning IV