Stable Adversarial Learning under Distributional Shifts

Authors

  • Jiashuo Liu, Tsinghua University
  • Zheyan Shen, Tsinghua University
  • Peng Cui, Tsinghua University
  • Linjun Zhou, Tsinghua University
  • Kun Kuang, Zhejiang University
  • Bo Li, Tsinghua University
  • Yishi Lin, Tencent

DOI:

https://doi.org/10.1609/aaai.v35i10.17050

Keywords:

Adversarial Learning & Robustness, Causal Learning, Classification and Regression

Abstract

Machine learning algorithms with empirical risk minimization are vulnerable under distributional shifts because they greedily adopt all the correlations found in the training data. Recently, robust learning methods have been proposed to address this problem by minimizing the worst-case risk over an uncertainty set. However, they treat all covariates equally when constructing the uncertainty set, regardless of the stability of their correlations with the target, which results in an overwhelmingly large set and low confidence of the learner. In this paper, we propose the Stable Adversarial Learning (SAL) algorithm, which leverages heterogeneous data sources to construct a more practical uncertainty set and conducts differentiated robustness optimization, where covariates are differentiated according to the stability of their correlations with the target. We theoretically show that our method is tractable for stochastic gradient-based optimization and provide performance guarantees for it. Empirical studies on both simulated and real-world datasets validate the effectiveness of our method in terms of uniformly good performance across unknown distributional shifts.
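To make the abstract's core idea concrete, the sketch below illustrates one way differentiated robustness optimization could look: a worst-case (adversarial) risk minimization in which each covariate's perturbation budget grows with the instability of its correlation with the target across heterogeneous environments. This is a hypothetical PyTorch illustration and not the authors' implementation; the budget heuristic (stability_weights), the linear model, and all function names and hyperparameters are assumptions made for exposition.

import torch

def stability_weights(envs):
    # Per-covariate budget: dispersion of the covariate-target correlation
    # across heterogeneous environments (an illustrative heuristic).
    corrs = []
    for X, y in envs:
        Xc, yc = X - X.mean(0), y - y.mean()
        corrs.append((Xc * yc[:, None]).mean(0) / (Xc.std(0) * yc.std() + 1e-8))
    return torch.stack(corrs).std(0)  # unstable covariates get larger budgets

def sal_train(envs, epochs=200, lr=1e-2, adv_lr=0.1, adv_steps=5):
    X = torch.cat([e[0] for e in envs])
    y = torch.cat([e[1] for e in envs])
    eps = stability_weights(envs)  # differentiated perturbation budgets
    w = torch.zeros(X.shape[1], requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    opt = torch.optim.SGD([w, b], lr=lr)
    for _ in range(epochs):
        # Inner maximization: adversarially perturb covariates, each within
        # its own budget (projected gradient ascent on the squared loss).
        delta = torch.zeros_like(X, requires_grad=True)
        for _ in range(adv_steps):
            loss = ((X + delta) @ w + b - y).pow(2).mean()
            g, = torch.autograd.grad(loss, delta)
            step = delta + adv_lr * eps * g.sign()
            delta = torch.max(torch.min(step, eps), -eps)  # per-covariate box projection
            delta = delta.detach().requires_grad_(True)
        # Outer minimization: fit the model under the worst-case perturbation.
        opt.zero_grad()
        ((X + delta.detach()) @ w + b - y).pow(2).mean().backward()
        opt.step()
    return w.detach(), b.detach()

# Toy usage: two environments where the second covariate's effect flips sign.
torch.manual_seed(0)
def make_env(beta_unstable, n=500):
    X = torch.randn(n, 2)
    y = 1.0 * X[:, 0] + beta_unstable * X[:, 1] + 0.1 * torch.randn(n)
    return X, y

w, b = sal_train([make_env(1.0), make_env(-1.0)])
print(w)  # weight on the unstable covariate should shrink toward zero

Because the sign-flipping covariate receives a large budget, the adversary can punish any reliance on it, so the learned weight concentrates on the stable covariate. Giving stable covariates near-zero budgets keeps the uncertainty set small, which is the qualitative behavior the abstract describes.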

Published

2021-05-18

How to Cite

Liu, J., Shen, Z., Cui, P., Zhou, L., Kuang, K., Li, B., & Lin, Y. (2021). Stable Adversarial Learning under Distributional Shifts. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10), 8662-8670. https://doi.org/10.1609/aaai.v35i10.17050

Section

AAAI Technical Track on Machine Learning III