Defending against Backdoor Attacks in Natural Language Generation

Authors

  • Xiaofei Sun, Zhejiang University
  • Xiaoya Li, Shannon.AI
  • Yuxian Meng, Shannon.AI
  • Xiang Ao, Chinese Academy of Sciences
  • Lingjuan Lyu, Sony AI
  • Jiwei Li, Shannon.AI; Zhejiang University
  • Tianwei Zhang, Nanyang Technological University

DOI:

https://doi.org/10.1609/aaai.v37i4.25656

Keywords:

APP: Security, SNLP: Bias, Fairness, Transparency & Privacy, SNLP: Adversarial Attacks & Robustness, SNLP: Generation

Abstract

The frustratingly fragile nature of neural network models makes current natural language generation (NLG) systems prone to backdoor attacks, which can cause them to generate malicious sequences that are sexist or offensive. Unfortunately, little effort has been invested in understanding how backdoor attacks affect current NLG models and how to defend against these attacks. In this work, by giving formal definitions of backdoor attack and defense, we investigate this problem on two important NLG tasks, machine translation and dialog generation. Tailored to the inherent nature of NLG models (e.g., producing a sequence of coherent words given contexts), we design defending strategies against attacks. We find that testing the backward probability of generating sources given targets yields effective defense performance against all different types of attacks, and is able to handle the one-to-many issue in many NLG tasks such as dialog generation. We hope that this work can raise awareness of the backdoor risks concealed in deep NLG systems and inspire more future work (both attack and defense) in this direction.
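The core defense idea mentioned in the abstract can be illustrated with a minimal sketch: score each (source, generated target) pair under a backward model that maps target back to source, and flag pairs whose source is improbable given the output. The model name, threshold, and scoring details below are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of a backward-probability check for a translation system.
# A backdoor-triggered output tends to be weakly tied to its source, so a low
# backward probability p(source | target) marks the pair as suspicious.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical backward model trained in the reverse direction
# (e.g., de->en when the system under test translates en->de).
BACKWARD_MODEL = "Helsinki-NLP/opus-mt-de-en"

tokenizer = AutoTokenizer.from_pretrained(BACKWARD_MODEL)
model = AutoModelForSeq2SeqLM.from_pretrained(BACKWARD_MODEL)
model.eval()


def backward_log_prob(source: str, target: str) -> float:
    """Average token-level log p(source | target) under the backward model."""
    inputs = tokenizer(target, return_tensors="pt")
    labels = tokenizer(text_target=source, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(**inputs, labels=labels)
    # out.loss is the mean negative log-likelihood of the label tokens.
    return -out.loss.item()


def is_suspicious(source: str, target: str, threshold: float = -4.0) -> bool:
    """Flag outputs whose source is improbable given the generated target.

    The threshold is an illustrative value and would be tuned on clean data.
    """
    return backward_log_prob(source, target) < threshold
```

In practice, the threshold would be calibrated on trusted (source, target) pairs so that benign outputs rarely fall below it.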

Published

2023-06-26

How to Cite

Sun, X., Li, X., Meng, Y., Ao, X., Lyu, L., Li, J., & Zhang, T. (2023). Defending against Backdoor Attacks in Natural Language Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 37(4), 5257-5265. https://doi.org/10.1609/aaai.v37i4.25656

Section

AAAI Technical Track on Domain(s) of Application