Self-Supervised Learning for Generalizable Out-of-Distribution Detection

Authors

  • Sina Mohseni, Texas A&M University
  • Mandar Pitale, Nvidia
  • JBS Yadawa, Nvidia
  • Zhangyang Wang, Texas A&M University

DOI:

https://doi.org/10.1609/aaai.v34i04.5966

Abstract

The real-world deployment of Deep Neural Networks (DNNs) in safety-critical applications such as autonomous vehicles requires addressing a variety of DNN vulnerabilities, one of which is detecting and rejecting out-of-distribution (OOD) outliers that might result in unpredictable fatal errors. We propose a new technique that relies on self-supervision for generalizable OOD feature learning and rejects such samples at inference time. Our technique does not require prior knowledge of the target OOD distribution and incurs no extra overhead compared to other methods. We perform multiple image classification experiments and observe that our technique performs favorably against state-of-the-art OOD detection methods. Interestingly, we find that our method also reduces in-distribution classification risk by rejecting samples near the boundaries of the training distribution.
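
As a rough illustration of the inference-time rejection step mentioned in the abstract, the sketch below shows a generic threshold-based OOD rejection in PyTorch. The function name reject_ood, the parameters num_known_classes and threshold, and the assumption that the classifier emits extra reject-class logits are illustrative assumptions, not details taken from the paper.

    import torch
    import torch.nn.functional as F

    def reject_ood(logits, num_known_classes, threshold=0.5):
        """Threshold-based OOD rejection (illustrative sketch, not the paper's exact method).

        Assumes the classifier outputs logits for the known classes followed by
        one or more auxiliary reject classes. A sample is flagged as OOD when the
        total probability mass on the reject classes exceeds `threshold`.
        """
        probs = F.softmax(logits, dim=-1)
        # Probability mass assigned to the auxiliary reject classes.
        ood_score = probs[..., num_known_classes:].sum(dim=-1)
        is_ood = ood_score > threshold
        # In-distribution prediction over the known classes only.
        preds = probs[..., :num_known_classes].argmax(dim=-1)
        return preds, ood_score, is_ood

    # Usage example with random logits: 10 known classes plus 2 reject classes.
    logits = torch.randn(4, 12)
    preds, scores, is_ood = reject_ood(logits, num_known_classes=10)
    print(preds, scores, is_ood)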

Published

2020-04-03

How to Cite

Mohseni, S., Pitale, M., Yadawa, J., & Wang, Z. (2020). Self-Supervised Learning for Generalizable Out-of-Distribution Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 5216-5223. https://doi.org/10.1609/aaai.v34i04.5966

Section

AAAI Technical Track: Machine Learning