Attack-inspired Calibration Loss for Calibrating Crack Recognition

Authors

  • Zhuangzhuang Chen Shenzhen University
  • Qiangyu Chen Shenzhen University
  • Jiahao Zhang Shenzhen University
  • Zhiliang Lin Shenzhen University
  • Xingyu Feng Shenzhen University
  • Jie Chen Shenzhen University
  • Jianqiang Li Shenzhen University

DOI:

https://doi.org/10.1609/aaai.v39i15.33755

Abstract

Deep neural networks (DNNs) have achieved high predictive accuracy in many vision tasks. However, we find that they are poorly calibrated for crack recognition tasks, as these DNNs tend to produce both under-confident and over-confident predictions in such safety-critical applications, thereby limiting their practical use in real-world scenarios. To address this issue, we propose a novel attack-inspired calibration loss (AICL) that explicitly regularizes class probabilities toward better confidence estimates. Specifically, we first propose an attack-inspired correctness estimation (ACE) method that estimates the correctness degree of each sample via adversarial attacks. Then, we propose Correctness-aware Distribution Guidance, which, from a distribution perspective, enforces the ordinal ranking of the predicted confidence according to the estimated correctness degree. The proposed method can be conveniently implemented on top of any DNN-based crack recognition model by serving as a plug-and-play loss function. To address the limited availability of related benchmarks, we collect a fully annotated dataset, namely, Bridge2024, which involves inconsistent cracks and noisy backgrounds in real-world bridges. Our AICL outperforms state-of-the-art calibration methods on various benchmark datasets including CRACK2019, SDNET2018, and our Bridge2024.
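The abstract's two ingredients can be illustrated with a minimal sketch: a correctness degree derived from how many adversarial attack steps a sample's prediction survives, and a pairwise ranking loss that penalizes confidences ordered inconsistently with that degree. This is a hypothetical formulation assumed for illustration only; the function names (`correctness_degree`, `ranking_calibration_loss`), the step-counting proxy, and the hinge margin are not taken from the paper.

```python
import numpy as np

def correctness_degree(flip_steps, max_steps):
    """Illustrative proxy for ACE: samples whose prediction survives more
    adversarial attack steps before flipping are assigned a higher
    correctness degree (hypothetical formulation, not the paper's)."""
    return np.asarray(flip_steps, dtype=float) / max_steps

def ranking_calibration_loss(confidences, degrees, margin=0.05):
    """Pairwise hinge loss enforcing that predicted confidence follows the
    ordinal ranking of the estimated correctness degree: whenever sample i
    is deemed more correct than sample j, its confidence should exceed
    sample j's by at least `margin`."""
    conf = np.asarray(confidences, dtype=float)
    deg = np.asarray(degrees, dtype=float)
    loss, pairs = 0.0, 0
    for i in range(len(conf)):
        for j in range(len(conf)):
            if deg[i] > deg[j]:
                loss += max(0.0, conf[j] - conf[i] + margin)
                pairs += 1
    return loss / max(pairs, 1)
```

In a training loop this term would be added to the usual cross-entropy loss, making the scheme plug-and-play in the sense the abstract describes: only the loss function changes, not the recognition model.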

Published

2025-04-11

How to Cite

Chen, Z., Chen, Q., Zhang, J., Lin, Z., Feng, X., Chen, J., & Li, J. (2025). Attack-inspired Calibration Loss for Calibrating Crack Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 39(15), 15984-15992. https://doi.org/10.1609/aaai.v39i15.33755

Section

AAAI Technical Track on Machine Learning I