Attacking CNNs in Histopathology with SNAP: Sporadic and Naturalistic Adversarial Patches (Student Abstract)
DOI: https://doi.org/10.1609/aaai.v38i21.30468
Keywords:
Adversarial Attacks, Histopathology, Convolutional Neural Networks
Abstract
Convolutional neural networks (CNNs) are increasingly adopted in medical imaging. However, in the race to develop accurate models, their robustness is often overlooked, which raises significant concern given the safety-critical nature of healthcare. Here, we highlight the vulnerability of CNNs to a sporadic and naturalistic adversarial patch attack (SNAP). We train SNAP to mislead a ResNet50 model predicting metastasis in histopathological scans of lymph node sections, lowering its accuracy by 27%. This work emphasizes the need for defense strategies before CNNs are deployed in critical healthcare settings.
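The core operation behind a patch attack of this kind can be illustrated at a high level. The following is a minimal, hypothetical sketch (not the authors' SNAP implementation; all function names and sizes are illustrative) showing how a small patch might be pasted at a random, "sporadic" location in an image tile. A full attack would additionally optimize the patch pixels, for example by gradient ascent on the model's loss, so that patched tiles are misclassified.

```python
import numpy as np

def apply_patch(image, patch, rng=None):
    """Paste a small patch at a random location in an H x W x C image tile.

    Illustrative helper only: the adversarial part of a real patch
    attack lies in optimizing the patch contents against the model,
    which is omitted here.
    """
    rng = rng or np.random.default_rng()
    h, w = patch.shape[:2]
    H, W = image.shape[:2]
    # Choose a random top-left corner so the patch fits inside the tile.
    top = rng.integers(0, H - h + 1)
    left = rng.integers(0, W - w + 1)
    out = image.copy()
    out[top:top + h, left:left + w] = patch
    return out

# Example: place a 16x16 patch into a 96x96 RGB tile (sizes are arbitrary).
tile = np.zeros((96, 96, 3), dtype=np.float32)
patch = np.ones((16, 16, 3), dtype=np.float32)
patched = apply_patch(tile, patch)
```

In an evaluation loop, accuracy on such patched tiles would be compared against accuracy on clean tiles to quantify the drop the abstract reports.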
Published
2024-03-24
How to Cite
Kumar, D., Sharma, A., & Narayan, A. (2024). Attacking CNNs in Histopathology with SNAP: Sporadic and Naturalistic Adversarial Patches (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23550-23551. https://doi.org/10.1609/aaai.v38i21.30468
Section
AAAI Student Abstract and Poster Program