Self-Supervised Learning for Generalizable Out-of-Distribution Detection


  • Sina Mohseni Texas A&M University
  • Mandar Pitale Nvidia
  • JBS Yadawa Nvidia
  • Zhangyang Wang Texas A&M University



The real-world deployment of Deep Neural Networks (DNNs) in safety-critical applications such as autonomous vehicles needs to address a variety of DNN vulnerabilities, one of which is detecting and rejecting out-of-distribution outliers that might result in unpredictable fatal errors. We propose a new technique relying on self-supervision for generalizable out-of-distribution (OOD) feature learning and rejecting those samples at inference time. Our technique does not need prior knowledge of the distribution of targeted OOD samples and incurs no extra overhead compared to other methods. We perform multiple image classification experiments and observe that our technique performs favorably against state-of-the-art OOD detection methods. Interestingly, we find that our method also reduces in-distribution classification risk by rejecting samples near the boundaries of the training set distribution.
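The abstract does not spell out the paper's self-supervised scoring rule, but the general idea of rejecting OOD samples at inference time can be illustrated with a generic confidence-thresholding scheme (maximum softmax probability). The `reject_ood` function and the `threshold` value below are illustrative assumptions, not the authors' method; a minimal sketch:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def reject_ood(logits, threshold=0.9):
    """Return the predicted class per sample, or -1 (reject) when the
    maximum softmax probability falls below the threshold.
    NOTE: a generic stand-in for the paper's OOD score, not its method."""
    probs = softmax(np.asarray(logits, dtype=float))
    conf = probs.max(axis=-1)
    preds = probs.argmax(axis=-1)
    preds[conf < threshold] = -1
    return preds

# A confident (in-distribution-looking) sample vs. an ambiguous one.
preds = reject_ood([[10.0, 0.0, 0.0], [1.0, 1.1, 0.9]], threshold=0.9)
```

The first sample is accepted and classified; the second's near-uniform logits yield low confidence, so it is rejected rather than risking an unpredictable error.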




How to Cite

Mohseni, S., Pitale, M., Yadawa, J., & Wang, Z. (2020). Self-Supervised Learning for Generalizable Out-of-Distribution Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 5216-5223.



AAAI Technical Track: Machine Learning