Mahalanobis-Aware Training for Out-of-Distribution Detection

Authors

  • Connor Mclaughlin, MIT Lincoln Laboratory and Northeastern University
  • Jason Matterer, MIT Lincoln Laboratory and STR
  • Michael Yee, MIT Lincoln Laboratory

DOI

https://doi.org/10.1609/aaaiss.v2i1.27681

Keywords

Machine Learning (ML), Reliability, Safety, Computer Vision (CV)

Abstract

While deep learning models have seen widespread success in controlled environments, there are still barriers to their adoption in open-world settings. One critical task for safe deployment is the detection of anomalous or out-of-distribution samples that may require human intervention. In this work, we present a novel loss function and recipe for training networks with improved density-based out-of-distribution sensitivity. We demonstrate the effectiveness of our method on CIFAR-10, notably reducing the false-positive rate of the relative Mahalanobis distance method on far-OOD tasks by over 50%.
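For context on the detection method the abstract refers to, below is a minimal NumPy sketch of the relative Mahalanobis distance (RMD) score that the paper builds on (not the paper's training recipe or loss function): fit per-class Gaussians with a shared covariance over network features, fit one class-agnostic "background" Gaussian, and score a sample by its nearest-class Mahalanobis distance minus its background distance. All function and variable names here are illustrative.

```python
import numpy as np

def fit_gaussians(feats, labels):
    """Fit per-class Gaussians with a shared (tied) covariance, plus one
    class-agnostic background Gaussian over all features.

    feats: (N, D) array of penultimate-layer features from the network.
    labels: (N,) integer class labels.
    Returns class means, shared precision, background mean, background precision.
    """
    classes = np.unique(labels)
    means = {c: feats[labels == c].mean(axis=0) for c in classes}
    # Shared covariance: pool the class-centered features across all classes.
    centered = np.concatenate([feats[labels == c] - means[c] for c in classes])
    shared_prec = np.linalg.pinv(np.cov(centered, rowvar=False))
    bg_mean = feats.mean(axis=0)
    bg_prec = np.linalg.pinv(np.cov(feats - bg_mean, rowvar=False))
    return means, shared_prec, bg_mean, bg_prec

def relative_mahalanobis_score(x, means, shared_prec, bg_mean, bg_prec):
    """RMD(x) = min_c MD_c(x) - MD_background(x); higher means more OOD."""
    md_cls = min((x - m) @ shared_prec @ (x - m) for m in means.values())
    md_bg = (x - bg_mean) @ bg_prec @ (x - bg_mean)
    return md_cls - md_bg
```

Subtracting the background distance discounts directions of variation shared by all in-distribution data, which is what makes RMD more robust than the plain Mahalanobis score on near-OOD and far-OOD benchmarks; a threshold on the score then flags samples for human review.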

Published

2024-01-22

Section

Assured and Trustworthy Human-centered AI (ATHAI)