Adversarial Training for Improving Model Robustness? Look at Both Prediction and Interpretation

Authors

  • Hanjie Chen, University of Virginia
  • Yangfeng Ji, University of Virginia

DOI:

https://doi.org/10.1609/aaai.v36i10.21289

Keywords:

Speech & Natural Language Processing (SNLP)

Abstract

Neural language models are vulnerable to adversarial examples that are semantically similar to their original counterparts, with only a few words replaced by synonyms. A common way to improve model robustness is adversarial training, which follows two steps: collecting adversarial examples by attacking a target model, and fine-tuning the model on the dataset augmented with these adversarial examples. The objective of traditional adversarial training is to make a model produce the same correct predictions on an original/adversarial example pair. However, the consistency of model decision-making on the two similar texts is ignored. We argue that a robust model should behave consistently on original/adversarial example pairs: it should make the same predictions (what) based on the same reasons (how), which can be reflected by consistent interpretations. In this work, we propose a novel feature-level adversarial training method named FLAT. FLAT aims at improving model robustness in terms of both predictions and interpretations. FLAT incorporates variational word masks in neural networks to learn global word importance; the masks serve as a bottleneck that teaches the model to make predictions based on important words. FLAT explicitly targets the vulnerability caused by the mismatch between model understandings of the replaced words and their synonyms in original/adversarial example pairs by regularizing the corresponding global word importance scores. Experiments show that FLAT improves the robustness, with respect to both predictions and interpretations, of four neural network models (LSTM, CNN, BERT, and DeBERTa) against two adversarial attacks on four text classification tasks. The models trained via FLAT also show better robustness than baseline models on unforeseen adversarial examples across different attacks.
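To make the regularization idea in the abstract concrete, below is a minimal sketch (not the authors' implementation) of penalizing the gap between the global importance scores of a replaced word and its synonym. The names `importance_regularizer`, `importance`, and `synonym_pairs` are illustrative assumptions, and the squared-difference penalty is a plausible instantiation rather than the paper's exact loss.

```python
# Hedged sketch: encourage a model's learned global word importance
# to agree on (original word, synonym) pairs, so swapping a word for
# its synonym does not change what the model deems important.

def importance_regularizer(importance, synonym_pairs):
    """Sum of squared differences between the global importance
    scores of each (original word, synonym) pair.

    importance: dict mapping word -> global importance score
    synonym_pairs: list of (original_word, synonym) tuples
    """
    return sum((importance[w] - importance[s]) ** 2 for w, s in synonym_pairs)

# Toy importance scores, as might be learned by a word-mask bottleneck.
importance = {"good": 0.9, "great": 0.4, "movie": 0.1}
penalty = importance_regularizer(importance, [("good", "great")])
# penalty = (0.9 - 0.4)^2 = 0.25
```

In training, such a penalty would be added to the prediction loss on the augmented dataset, pushing the scores of synonym pairs together so the model's interpretations stay consistent across original and adversarial inputs.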

Published

2022-06-28

How to Cite

Chen, H., & Ji, Y. (2022). Adversarial Training for Improving Model Robustness? Look at Both Prediction and Interpretation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10), 10463-10472. https://doi.org/10.1609/aaai.v36i10.21289

Section

AAAI Technical Track on Speech and Natural Language Processing