Holistic Adversarial Robustness of Deep Learning Models
DOI: https://doi.org/10.1609/aaai.v37i13.26797
Keywords: Adversarial Robustness, Deep Learning, Attack, Defense, Verification
Abstract
Adversarial robustness studies the worst-case performance of a machine learning model to ensure safety and reliability. With the proliferation of deep-learning-based technology, the potential risks associated with model development and deployment can be amplified into serious vulnerabilities. This paper provides a comprehensive overview of research topics and foundational principles of research methods for the adversarial robustness of deep learning models, including attacks, defenses, verification, and novel applications.
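To make the notion of a worst-case (adversarial) input concrete, the sketch below illustrates one of the simplest attacks surveyed in this line of work, the fast gradient sign method (FGSM), applied to a toy linear logistic model. The model weights, input, and epsilon value are hypothetical choices for illustration, not taken from the paper:

```python
import numpy as np

def fgsm(x, y, w, eps):
    """FGSM attack on a linear logistic model (illustrative sketch).

    Perturbs input x by eps (per coordinate) in the sign direction of the
    gradient of the logistic loss, for a true label y in {-1, +1}.
    """
    margin = y * np.dot(w, x)
    # Gradient of the logistic loss wrt x is -y * sigmoid(-margin) * w.
    grad = -y * (1.0 / (1.0 + np.exp(margin))) * w
    return x + eps * np.sign(grad)

# Hypothetical toy model and a clean input correctly classified as +1.
w = np.array([2.0, -1.0])
x = np.array([1.0, 0.5])
y = 1
x_adv = fgsm(x, y, w, eps=0.8)
print(np.dot(w, x) > 0)       # clean point: correct side of the boundary
print(np.dot(w, x_adv) > 0)   # perturbed point: prediction flipped
```

With these toy numbers the clean score is 1.5 (classified +1) while the perturbed score is -0.9, so a small bounded perturbation flips the prediction, which is exactly the failure mode that attack, defense, and verification research targets.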
Published
2023-09-06
How to Cite
Chen, P.-Y., & Liu, S. (2023). Holistic Adversarial Robustness of Deep Learning Models. Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 15411-15420. https://doi.org/10.1609/aaai.v37i13.26797
Section: Senior Member Presentation: Summary Papers