BERT & Family Eat Word Salad: Experiments with Text Understanding

Authors

  • Ashim Gupta University of Utah
  • Giorgi Kvernadze University of Utah
  • Vivek Srikumar University of Utah

DOI:

https://doi.org/10.1609/aaai.v35i14.17531

Keywords:

Adversarial Attacks & Robustness, Interpretability & Analysis of NLP Models

Abstract

In this paper, we study how large models from the BERT family respond to incoherent inputs that should confuse any model claiming to understand natural language. We define simple heuristics to construct such examples. Our experiments show that state-of-the-art models consistently fail to recognize these inputs as ill-formed and instead produce high-confidence predictions on them. As a consequence of this phenomenon, models trained on sentences with randomly permuted word order perform close to state-of-the-art models. To alleviate these issues, we show that if models are explicitly trained to recognize invalid inputs, they can be robust to such attacks without a drop in performance.
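As an illustrative sketch only (an assumption about one of the "simple heuristics" mentioned above, not necessarily the authors' exact construction), randomly permuting the word order of a sentence produces the kind of ill-formed "word salad" input the abstract describes:

```python
import random

def word_salad(sentence: str, seed: int = 0) -> str:
    """Sketch of a word-salad heuristic: shuffle word order so the
    input is incoherent while keeping the same bag of words.
    This is an illustrative assumption, not the paper's exact method."""
    tokens = sentence.split()
    rng = random.Random(seed)
    rng.shuffle(tokens)
    return " ".join(tokens)

# Example: a coherent sentence becomes an ill-formed input.
print(word_salad("the cat sat on the mat"))
# e.g. -> "on the mat sat cat the"
```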

Published

2021-05-18

How to Cite

Gupta, A., Kvernadze, G., & Srikumar, V. (2021). BERT & Family Eat Word Salad: Experiments with Text Understanding. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14), 12946-12954. https://doi.org/10.1609/aaai.v35i14.17531

Section

AAAI Technical Track on Speech and Natural Language Processing I