CASE: Exploiting Intra-class Compactness and Inter-class Separability of Feature Embeddings for Out-of-Distribution Detection

Authors

  • Shuai Feng, Nanjing University
  • Pengsheng Jin, Nanjing University
  • Chongjun Wang, Nanjing University

DOI:

https://doi.org/10.1609/aaai.v38i19.30100

Keywords:

General

Abstract

Detecting out-of-distribution (OOD) inputs is critical for reliable machine learning, but deep neural networks often make overconfident predictions, even for OOD inputs that deviate from the distribution of the training data. Prior methods rely on the widely used softmax cross-entropy (CE) loss, which is adequate for classifying in-distribution (ID) samples but not optimally designed for OOD detection. To address this issue, we propose CASE, a simple and effective OOD detection method that explicitly improves intra-class Compactness And inter-class Separability of feature Embeddings. To enhance the separation between ID and OOD samples, CASE uses a dual-loss framework: a separability loss that maximizes the inter-class Euclidean distance to promote separability among different class centers, and a compactness loss that minimizes the intra-class Euclidean distance to encourage samples to lie close to their class centers. In particular, the class centers are defined as free optimization parameters of the model and updated by gradient descent, which is simple and further enhances OOD detection performance. Extensive experiments demonstrate the superiority of CASE, which reduces the average FPR95 by 37.11% and improves the average AUROC by 15.89% compared to the baseline method using a softmax confidence score on the more challenging CIFAR-100 model.
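The dual-loss idea described in the abstract can be sketched as follows. This is a minimal NumPy illustration of the general principle, not the authors' exact formulation: the function names, the specific distance aggregation, and the toy data are all assumptions. A compactness term pulls each embedding toward its class center, a separability term pushes distinct class centers apart, and the centers themselves are treated as free parameters (here just an array that an optimizer would update).

```python
import numpy as np

def compactness_loss(features, labels, centers):
    """Mean squared Euclidean distance from each sample to its class center.

    features: (N, D) feature embeddings
    labels:   (N,) integer class labels
    centers:  (K, D) learnable class centers (free parameters in CASE)
    """
    diffs = features - centers[labels]          # (N, D) sample-to-center offsets
    return np.mean(np.sum(diffs ** 2, axis=1))

def separability_loss(centers):
    """Negative mean pairwise Euclidean distance between distinct centers.

    Minimizing this term maximizes inter-class separation.
    """
    k = centers.shape[0]
    dists = [np.linalg.norm(centers[i] - centers[j])
             for i in range(k) for j in range(i + 1, k)]
    return -np.mean(dists)

# Toy example: 3 classes with 4-D embeddings scattered around their centers.
rng = np.random.default_rng(0)
centers = rng.normal(size=(3, 4))
labels = np.array([0, 0, 1, 2])
features = centers[labels] + 0.1 * rng.normal(size=(4, 4))

# Total objective (in the paper this would be combined with the CE loss
# and minimized by gradient descent over both the network and the centers).
total = compactness_loss(features, labels, centers) + separability_loss(centers)
```

In the full method, both losses are differentiable in the centers, so gradient descent jointly tightens each class cluster and spreads the clusters apart, which is what widens the gap between ID and OOD samples in feature space.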

Published

2024-03-24

How to Cite

Feng, S., Jin, P., & Wang, C. (2024). CASE: Exploiting Intra-class Compactness and Inter-class Separability of Feature Embeddings for Out-of-Distribution Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 38(19), 21081-21089. https://doi.org/10.1609/aaai.v38i19.30100

Section

AAAI Technical Track on Safe, Robust and Responsible AI Track