Word Level Robustness Enhancement: Fight Perturbation with Perturbation

Authors

  • Pei Huang — Institute of Software, Chinese Academy of Sciences; University of Chinese Academy of Sciences
  • Yuting Yang — Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences
  • Fuqi Jia — Institute of Software, Chinese Academy of Sciences; University of Chinese Academy of Sciences
  • Minghao Liu — Institute of Software, Chinese Academy of Sciences; University of Chinese Academy of Sciences
  • Feifei Ma — Institute of Software, Chinese Academy of Sciences; University of Chinese Academy of Sciences
  • Jian Zhang — Institute of Software, Chinese Academy of Sciences; University of Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v36i10.21324

Keywords:

Speech & Natural Language Processing (SNLP)

Abstract

State-of-the-art deep NLP models have achieved impressive improvements on many tasks. However, they are vulnerable to small, carefully crafted perturbations. Before such models are widely adopted, this fundamental robustness issue needs to be addressed. In this paper, we design a robustness enhancement method to defend against word-substitution perturbations; its basic idea is to fight perturbation with perturbation. We observe that although many well-trained deep models are not robust in the presence of adversarial samples, they do satisfy a weak robustness property: they handle most non-crafted perturbations well. Exploiting this weak robustness, we use non-crafted random perturbations to resist the adversarial perturbations crafted by attackers. Our method has two main stages. The first stage applies randomized perturbation to conform the input to the data distribution. The second stage applies randomized perturbation to eliminate instability in the prediction results and strengthen the robustness guarantee. Experimental results show that our method significantly improves the ability of deep models to resist state-of-the-art adversarial attacks while maintaining prediction performance on the original clean data.
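The second stage described above can be sketched as majority voting over randomly perturbed copies of the input. The snippet below is a minimal illustration of that idea only; the synonym table (`SYNONYMS`), the toy classifier, and all parameter choices are hypothetical stand-ins, not the authors' actual implementation.

```python
import random
from collections import Counter

# Toy synonym table standing in for a real substitution vocabulary (assumption).
SYNONYMS = {
    "good": ["great", "fine", "nice"],
    "bad": ["poor", "awful", "terrible"],
    "movie": ["film", "picture"],
}

def toy_classifier(words):
    """Hypothetical sentiment model: positive iff a 'good'-family word appears."""
    positive = {"good", "great", "fine", "nice"}
    return "pos" if any(w in positive for w in words) else "neg"

def randomized_smooth_predict(sentence, model, n_samples=25, sub_prob=0.3, seed=0):
    """Stage-2 sketch: classify many randomly word-substituted copies of the
    input and return the majority-vote label, damping unstable predictions."""
    rng = random.Random(seed)
    words = sentence.split()
    votes = Counter()
    for _ in range(n_samples):
        perturbed = [
            rng.choice(SYNONYMS[w]) if w in SYNONYMS and rng.random() < sub_prob else w
            for w in words
        ]
        votes[model(perturbed)] += 1
    return votes.most_common(1)[0][0]

print(randomized_smooth_predict("a good movie", toy_classifier))
```

Because each vote comes from an independently perturbed copy, a single adversarial word substitution is unlikely to flip the aggregated prediction.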

Published

2022-06-28

How to Cite

Huang, P., Yang, Y., Jia, F., Liu, M., Ma, F., & Zhang, J. (2022). Word Level Robustness Enhancement: Fight Perturbation with Perturbation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10), 10785-10793. https://doi.org/10.1609/aaai.v36i10.21324

Section

AAAI Technical Track on Speech and Natural Language Processing