Weighted-Sampling Audio Adversarial Example Attack

Authors

  • Xiaolei Liu, University of Electronic Science and Technology of China
  • Kun Wan, University of California Santa Barbara
  • Yufei Ding, University of California Santa Barbara
  • Xiaosong Zhang, University of Electronic Science and Technology of China
  • Qingxin Zhu, University of Electronic Science and Technology of China

DOI:

https://doi.org/10.1609/aaai.v34i04.5928

Abstract

Recent studies have highlighted audio adversarial examples as a ubiquitous threat to state-of-the-art automatic speech recognition (ASR) systems. Thorough studies of how to generate adversarial examples effectively are essential for preventing potential attacks. Despite much research on this topic, the efficiency and robustness of existing works remain unsatisfactory. In this paper, we propose weighted-sampling audio adversarial examples, focusing on the number and the weights of the distortion positions to reinforce the attack. Furthermore, we apply a denoising method in the loss function to make the adversarial attack less perceptible. Experiments show that our method is the first in the field to generate audio adversarial examples with low noise and high robustness at minute-level time consumption.
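The page itself carries no code, but the abstract's core idea lends itself to a short illustration. Below is a minimal PyTorch sketch of the general approach described: at each optimization step, distortion positions are drawn from a weight distribution, and a denoising-style penalty is added to the adversarial loss. It assumes a differentiable ASR model trained with a CTC loss; the energy-based weighting, the `sample_ratio` and `alpha` hyperparameters, and all helper names are illustrative assumptions, not the authors' released implementation.

```python
import torch

def weighted_sampling_attack(model, ctc_loss_fn, x, target,
                             num_steps=100, lr=1e-3,
                             sample_ratio=0.5, alpha=0.1):
    """Sketch of a weighted-sampling audio adversarial attack.

    At each step, only a weighted random subset of waveform positions is
    perturbed, and a denoising-style penalty in the loss keeps the
    distortion small. Names and hyperparameters are illustrative.
    """
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    # Position weights: here proportional to local signal energy, so louder
    # regions (which mask noise better) are sampled more often.
    w = x.abs() + 1e-8
    w = w / w.sum()
    num_sampled = int(sample_ratio * x.numel())

    for _ in range(num_steps):
        # Sample distortion positions by weight and mask the perturbation
        # so only those positions are updated this step.
        idx = torch.multinomial(w, num_sampled, replacement=False)
        mask = torch.zeros_like(x)
        mask[idx] = 1.0

        adv = x + mask * delta
        log_probs = model(adv)                    # assumed shape (T, N, C)
        attack_loss = ctc_loss_fn(log_probs, target)
        # Denoising-style penalty, echoing the paper's idea of making the
        # perturbation less perceptible.
        noise_penalty = alpha * (mask * delta).pow(2).mean()
        loss = attack_loss + noise_penalty

        opt.zero_grad()
        loss.backward()
        opt.step()

    # Every position has been optimized in some step; apply the full delta.
    return (x + delta).detach()


# Toy usage with a stand-in linear "ASR model" and a CTC loss; a real
# attack would target an actual recognizer such as DeepSpeech.
T, C, frame = 50, 10, 160
lin = torch.nn.Linear(frame, C)

def model(wave):
    frames = wave.view(T, frame)
    return lin(frames).log_softmax(-1).unsqueeze(1)   # (T, 1, C)

ctc = torch.nn.CTCLoss(blank=0)

def ctc_loss_fn(log_probs, tgt):
    return ctc(log_probs, tgt,
               torch.tensor([log_probs.size(0)]),
               torch.tensor([tgt.size(1)]))

x = torch.randn(T * frame)
target = torch.tensor([[3, 1, 4]])                    # assumed target token ids
adv = weighted_sampling_attack(model, ctc_loss_fn, x, target, num_steps=20)
```

The weighting scheme shown here is one plausible choice; the key design point from the abstract is that the number and weights of distorted positions, together with the denoising term, jointly trade off attack strength against perceptibility.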

Published

2020-04-03

How to Cite

Liu, X., Wan, K., Ding, Y., Zhang, X., & Zhu, Q. (2020). Weighted-Sampling Audio Adversarial Example Attack. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 4908-4915. https://doi.org/10.1609/aaai.v34i04.5928

Section

AAAI Technical Track: Machine Learning