Robustness to Spurious Correlations Improves Semantic Out-of-Distribution Detection

Authors

  • Lily H. Zhang, New York University
  • Rajesh Ranganath, New York University

DOI:

https://doi.org/10.1609/aaai.v37i12.26785

Keywords:

General

Abstract

Methods that utilize the outputs or feature representations of predictive models have emerged as promising approaches for out-of-distribution (OOD) detection of image inputs. However, as demonstrated in previous work, these methods struggle to detect OOD inputs that share nuisance values (e.g., background) with in-distribution inputs. The detection of shared-nuisance OOD (SN-OOD) inputs is particularly relevant in real-world applications, as anomalies and in-distribution inputs tend to be captured in the same settings during deployment. In this work, we provide a possible explanation for these failures and propose nuisance-aware OOD detection to address them. Nuisance-aware OOD detection substitutes a classifier trained via Empirical Risk Minimization (ERM) with one that (1) approximates a distribution where the nuisance-label relationship is broken and (2) yields representations that are independent of the nuisance under this distribution, both marginally and conditioned on the label. We can train a classifier to achieve these objectives using Nuisance-Randomized Distillation (NuRD), an algorithm developed for OOD generalization under spurious correlations. Output- and feature-based nuisance-aware OOD detection perform substantially better than their original counterparts, succeeding even when detection based on domain generalization algorithms fails to improve performance.
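
As a rough illustration of the output-based variant described above, the sketch below shows a standard maximum-softmax-probability (MSP) scoring step in PyTorch; only the underlying classifier changes (a NuRD-trained model in place of an ERM-trained one). The name `nurd_classifier` and the threshold are hypothetical placeholders, not part of the paper's released code.

```python
import torch
import torch.nn.functional as F

def msp_ood_score(logits: torch.Tensor) -> torch.Tensor:
    # Maximum softmax probability (MSP) score: higher values indicate lower
    # classifier confidence, i.e. the input is more likely to be OOD.
    probs = F.softmax(logits, dim=-1)
    return 1.0 - probs.max(dim=-1).values

# Hypothetical usage: `nurd_classifier` stands in for a classifier trained with
# NuRD instead of ERM, and `images` is a batch of input tensors.
# scores = msp_ood_score(nurd_classifier(images))
# is_ood = scores > threshold  # threshold set on held-out in-distribution data
```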

Published

2023-06-26

How to Cite

Zhang, L. H., & Ranganath, R. (2023). Robustness to Spurious Correlations Improves Semantic Out-of-Distribution Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 37(12), 15305-15312. https://doi.org/10.1609/aaai.v37i12.26785

Section

AAAI Special Track on Safe and Robust AI