Holistic Adversarial Robustness of Deep Learning Models

Authors

  • Pin-Yu Chen, IBM Research
  • Sijia Liu, Michigan State University

DOI:

https://doi.org/10.1609/aaai.v37i13.26797

Keywords:

Adversarial Robustness, Deep Learning, Attack, Defense, Verification

Abstract

Adversarial robustness studies the worst-case performance of a machine learning model to ensure safety and reliability. With the proliferation of deep-learning-based technology, the potential risks associated with model development and deployment are amplified, and weaknesses can turn into severe vulnerabilities. This paper provides a comprehensive overview of research topics and foundational principles of research methods for the adversarial robustness of deep learning models, including attacks, defenses, verification, and novel applications.
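To make the "worst-case performance" notion concrete, a minimal sketch of one classic attack is shown below: the fast gradient sign method (FGSM), which perturbs an input by `eps * sign(grad)` within an L-infinity budget. The toy logistic-regression model, its weights, and the `fgsm_perturb` helper are illustrative assumptions, not methods taken from this paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One-step L_inf attack on a logistic-regression model (toy sketch):
    x_adv = x + eps * sign(d loss / d x), where loss is binary cross-entropy.
    For p = sigmoid(w.x + b), the input gradient is (p - y) * w in closed form."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Hypothetical example: a point correctly classified before the attack...
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([2.0, 0.5])   # w @ x + b = 1.0, so p ~= 0.73 (class 1)
y = 1.0

# ...is pushed across the decision boundary by a small perturbation.
x_adv = fgsm_perturb(x, y, w, b, eps=0.4)
print(sigmoid(w @ x + b))      # > 0.5: originally classified as class 1
print(sigmoid(w @ x_adv + b))  # < 0.5: misclassified after the attack
```

Defenses such as adversarial training and robustness verification, also surveyed in the paper, are concerned with bounding or minimizing exactly this kind of worst-case loss over the perturbation set.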

Published

2023-09-06

How to Cite

Chen, P.-Y., & Liu, S. (2023). Holistic Adversarial Robustness of Deep Learning Models. Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 15411-15420. https://doi.org/10.1609/aaai.v37i13.26797