Adversarial Self-Attention for Language Understanding

Authors

  • Hongqiu Wu, Shanghai Jiao Tong University
  • Ruixue Ding, Alibaba Group
  • Hai Zhao, Shanghai Jiao Tong University
  • Pengjun Xie, Alibaba Group
  • Fei Huang, Alibaba Group
  • Min Zhang, Soochow University

DOI:

https://doi.org/10.1609/aaai.v37i11.26608

Keywords:

SNLP: Adversarial Attacks & Robustness, SNLP: Language Models

Abstract

Deep neural models (e.g., Transformer) readily learn spurious features that create a "shortcut" between labels and inputs, impairing generalization and robustness. This paper advances the self-attention mechanism of Transformer-based pre-trained language models (e.g., BERT) to a robust variant. We propose the Adversarial Self-Attention mechanism (ASA), which adversarially biases the attention to suppress the model's reliance on individual features (e.g., specific keywords) and encourage its exploration of broader semantics. We conduct a comprehensive evaluation across a wide range of tasks in both the pre-training and fine-tuning stages. In pre-training, ASA yields a remarkable performance gain over naive training run for more steps. In fine-tuning, ASA-empowered models outperform naive models by a large margin in both generalization and robustness.
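The paper's exact ASA formulation is not reproduced on this page. As a rough illustration of the idea in the abstract, the sketch below perturbs a model's attention logits with an additive bias found by one FGSM-style gradient-ascent step on the task loss, then trains the model against that perturbation (a min-max objective). All names here (`ToySelfAttention`, `adversarial_attention_step`, `eps`) are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToySelfAttention(nn.Module):
    """Single-head self-attention whose logits accept an additive bias."""
    def __init__(self, d_model):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.scale = d_model ** -0.5

    def forward(self, x, attn_bias=None):
        # (B, T, T) attention logits; the adversary perturbs them additively.
        logits = self.q(x) @ self.k(x).transpose(-2, -1) * self.scale
        if attn_bias is not None:
            logits = logits + attn_bias
        return F.softmax(logits, dim=-1) @ self.v(x)

def adversarial_attention_step(model, head, x, y, eps=1e-2):
    """Inner maximization: find an attention-logit bias that increases the
    task loss (one FGSM-style ascent step), then return the loss under that
    adversarial bias so the outer loop can minimize it."""
    B, T, _ = x.shape
    bias = torch.zeros(B, T, T, requires_grad=True)
    clean_loss = F.cross_entropy(head(model(x, bias).mean(dim=1)), y)
    (grad,) = torch.autograd.grad(clean_loss, bias)
    adv_bias = eps * grad.sign()  # ascent direction on the attention logits
    return F.cross_entropy(head(model(x, adv_bias).mean(dim=1)), y)

# Usage: one adversarial training step on random toy data.
model, head = ToySelfAttention(32), nn.Linear(32, 2)
opt = torch.optim.Adam(list(model.parameters()) + list(head.parameters()), lr=1e-3)
x, y = torch.randn(8, 16, 32), torch.randint(0, 2, (8,))
opt.zero_grad()
adversarial_attention_step(model, head, x, y).backward()
opt.step()
```

Note this sketch uses an unconstrained gradient-sign bias for brevity; the paper's ASA instead learns structured adversarial attention, so treat the above only as a pointer to the min-max training loop, not the method itself.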

Published

2023-06-26

How to Cite

Wu, H., Ding, R., Zhao, H., Xie, P., Huang, F., & Zhang, M. (2023). Adversarial Self-Attention for Language Understanding. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 13727-13735. https://doi.org/10.1609/aaai.v37i11.26608

Issue

Vol. 37 No. 11 (2023)

Section

AAAI Technical Track on Speech & Natural Language Processing