Key Feature Replacement of In-Distribution Samples for Out-of-Distribution Detection

Authors

  • Jaeyoung Kim (VUNO Inc.)
  • Seo Taek Kong (University of Illinois Urbana-Champaign)
  • Dongbin Na (VUNO Inc.)
  • Kyu-Hwan Jung (Samsung Advanced Institute for Health Sciences and Technology, Sungkyunkwan University)

DOI:

https://doi.org/10.1609/aaai.v37i7.25995

Keywords:

ML: Calibration & Uncertainty Quantification, ML: Classification and Regression

Abstract

Out-of-distribution (OOD) detection can be used in deep learning-based applications to reject outlier samples from being unreliably classified by deep neural networks. Learning to classify between OOD and in-distribution samples is difficult because data comprising the former is extremely diverse. It has been observed that an auxiliary OOD dataset is most effective in training a "rejection" network when its samples are semantically similar to in-distribution images. We first deduce that OOD images are perceived by a deep neural network to be semantically similar to in-distribution samples when they share a common background, as deep networks are observed to incorrectly classify such images with high confidence. We then propose a simple yet effective Key In-distribution feature Replacement BY inpainting (KIRBY) procedure that constructs a surrogate OOD dataset by replacing class-discriminative features of in-distribution samples with marginal background features. The procedure can be implemented using off-the-shelf vision algorithms, where each step within the algorithm is shown to make the surrogate data increasingly similar to in-distribution data. Design choices in each step are studied extensively, and an exhaustive comparison with state-of-the-art algorithms demonstrates KIRBY's competitiveness on various benchmarks.
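To make the core idea of the abstract concrete, the sketch below illustrates one plausible way to replace a class-discriminative region with background content using off-the-shelf tools. The saliency map, threshold, and inpainting settings here are illustrative assumptions, not the paper's exact pipeline.

```python
import cv2
import numpy as np

def kirby_style_surrogate(image_bgr, saliency_map, threshold=0.6, inpaint_radius=5):
    """Minimal sketch of the key-feature-replacement idea.

    Assumes `saliency_map` is an HxW array in [0, 1] produced by any
    off-the-shelf localization method (e.g., a CAM-style heatmap); the
    threshold and radius are placeholder values for illustration only.
    """
    # Mark the most class-discriminative pixels of the in-distribution image.
    mask = (saliency_map >= threshold).astype(np.uint8) * 255

    # Fill the masked region from its surroundings with classical inpainting,
    # so the result keeps the original background statistics and can serve
    # as a surrogate OOD sample.
    surrogate = cv2.inpaint(image_bgr, mask, inpaint_radius, cv2.INPAINT_TELEA)
    return surrogate
```

Such surrogates could then be mixed with in-distribution data to train a rejection head, in line with the auxiliary-OOD training setup described in the abstract.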


Published

2023-06-26

How to Cite

Kim, J., Kong, S. T., Na, D., & Jung, K.-H. (2023). Key Feature Replacement of In-Distribution Samples for Out-of-Distribution Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 8246-8254. https://doi.org/10.1609/aaai.v37i7.25995

Section

AAAI Technical Track on Machine Learning II