Attention-Conditioned Augmentations for Self-Supervised Anomaly Detection and Localization

Authors

  • Behzad Bozorgtabar (EPFL, CHUV)
  • Dwarikanath Mahapatra (Inception Institute of Artificial Intelligence)

DOI:

https://doi.org/10.1609/aaai.v37i12.26720

Keywords:

General

Abstract

Self-supervised anomaly detection and localization are critical in real-world scenarios where collecting anomalous samples and pixel-wise labels is tedious or infeasible, and worse still, a wide variety of unseen anomalies can surface at test time. Our approach involves a pretext task in the context of masked image modeling, where the goal is to impose agreement between the cluster assignments obtained from the representation of an image view containing saliency-aware masked patches and those of the uncorrupted image view. We harness the self-attention map extracted from the transformer to mask non-salient image patches without destroying the crucial structure associated with the foreground object. Subsequently, the pre-trained model is fine-tuned to detect and localize simulated anomalies generated under the guidance of the transformer's self-attention map. We conducted extensive validation and ablations on an industrial image benchmark and achieved superior performance against competing methods. We also show the adaptability of our method to medical images on a chest X-ray benchmark.
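The saliency-aware masking described in the abstract can be illustrated with a minimal sketch. The assumptions here are ours, not the authors' code: we suppose a per-patch saliency score is available (e.g. the CLS-token self-attention of a ViT, averaged over heads), and we mask the lowest-attention patches so that the foreground structure survives. The function name `saliency_masked_patches` and its signature are hypothetical.

```python
import numpy as np

def saliency_masked_patches(attn_map, mask_ratio=0.5):
    """Return indices of patches to mask, preferring NON-salient patches.

    attn_map    : 1-D array of per-patch attention scores (higher = more
                  salient; e.g. averaged CLS self-attention from a ViT).
    mask_ratio  : fraction of patches to mask.

    Hypothetical helper sketching attention-conditioned masking; the
    paper's actual masking procedure may differ.
    """
    attn_map = np.asarray(attn_map, dtype=float)
    n_patches = attn_map.size
    n_mask = int(round(mask_ratio * n_patches))
    # Rank patches by saliency, lowest attention first, and mask those:
    # the salient (foreground) patches are left intact.
    order = np.argsort(attn_map)
    return np.sort(order[:n_mask])

# Example: with four patches, masking half removes the two least salient.
attn = np.array([0.10, 0.90, 0.20, 0.80])
print(saliency_masked_patches(attn, mask_ratio=0.5))  # → [0 2]
```

In the pretext task, the view corrupted this way and the uncorrupted view would then be pushed toward the same cluster assignment; a randomized variant (e.g. sampling masked patches with probability inversely proportional to attention) would serve as the augmentation at training time.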

Published

2023-06-26

How to Cite

Bozorgtabar, B., & Mahapatra, D. (2023). Attention-Conditioned Augmentations for Self-Supervised Anomaly Detection and Localization. Proceedings of the AAAI Conference on Artificial Intelligence, 37(12), 14720-14728. https://doi.org/10.1609/aaai.v37i12.26720

Section

AAAI Special Track on Safe and Robust AI