Generating Universal Adversarial Perturbations for Quantum Classifiers

Authors

  • Gautham Anil, Indian Institute of Technology Madras
  • Vishnu Vinod, Indian Institute of Technology Madras
  • Apurva Narayan, University of Western Ontario; University of British Columbia; University of Waterloo

DOI:

https://doi.org/10.1609/aaai.v38i10.28963

Keywords:

ML: Quantum Machine Learning, ML: Adversarial Learning & Robustness

Abstract

Quantum Machine Learning (QML) has emerged as a promising field of research, aiming to leverage the capabilities of quantum computing to enhance existing machine learning methodologies. Recent studies have revealed that, like their classical counterparts, QML models based on Parametrized Quantum Circuits (PQCs) are also vulnerable to adversarial attacks. Moreover, the existence of Universal Adversarial Perturbations (UAPs) in the quantum domain has been demonstrated theoretically in the context of quantum classifiers. In this work, we introduce QuGAP: a novel framework for generating UAPs for quantum classifiers. We conceptualize the notion of additive UAPs for PQC-based classifiers and theoretically demonstrate their existence. We then utilize generative models (QuGAP-A) to craft additive UAPs and experimentally show that quantum classifiers are susceptible to such attacks. Furthermore, we formulate a new method for generating unitary UAPs (QuGAP-U) using quantum generative models and a novel loss function based on fidelity constraints. We evaluate the performance of the proposed framework and show that our method achieves state-of-the-art misclassification rates, while maintaining high fidelity between legitimate and adversarial samples.
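For a concrete picture of the additive-UAP threat model described above, the sketch below optimizes a single perturbation vector shared across all inputs against a small PQC classifier. It is an illustrative toy rather than the paper's QuGAP-A pipeline: the qubit count, the StronglyEntanglingLayers ansatz, the random stand-in data, the labels, and the L-infinity budget eps are assumptions made for the example, and the paper trains a generative model to produce the perturbation instead of optimizing it directly.

    import torch
    import pennylane as qml

    n_qubits = 4                    # assumption: 16-dim inputs, amplitude-encoded on 4 qubits
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev, interface="torch")
    def pqc_classifier(x, weights):
        # Amplitude-encode the (perturbed) classical input, then apply a trainable ansatz.
        qml.AmplitudeEmbedding(x, wires=range(n_qubits), normalize=True, pad_with=0.0)
        qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
        return qml.expval(qml.PauliZ(0))      # sign of <Z> serves as the binary label

    # Frozen victim weights (in practice, taken from an already-trained classifier).
    weights = torch.randn(qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits))

    # One universal additive perturbation, shared by every sample.
    delta = torch.zeros(16, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=0.05)
    eps = 0.1                                 # assumed L-infinity budget

    X = torch.rand(32, 16)                    # stand-in data; labels in {+1, -1}
    y = torch.sign(torch.rand(32) - 0.5)

    for step in range(100):
        opt.zero_grad()
        out = torch.stack([pqc_classifier(x + delta, weights) for x in X])
        loss = (y * out).mean()               # minimizing this pushes <Z> away from each true label
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)           # keep the universal perturbation small

The paper's QuGAP-A attack replaces the single delta vector with a trained generator, and QuGAP-U instead learns a unitary perturbation whose loss balances misclassification against fidelity between the legitimate and adversarial quantum states.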

Published

2024-03-24

How to Cite

Anil, G., Vinod, V., & Narayan, A. (2024). Generating Universal Adversarial Perturbations for Quantum Classifiers. Proceedings of the AAAI Conference on Artificial Intelligence, 38(10), 10891-10899. https://doi.org/10.1609/aaai.v38i10.28963

Section

AAAI Technical Track on Machine Learning I