Fraud’s Bargain Attacks to Textual Classifiers via Metropolis-Hasting Sampling (Student Abstract)

Authors

  • Mingze Ni University of Technology Sydney
  • Zhensu Sun ShanghaiTech University
  • Wei Liu University of Technology Sydney

DOI:

https://doi.org/10.1609/aaai.v37i13.27005

Keywords:

Adversarial Learning, Textual Attack, Natural Language Processing.

Abstract

Recent studies on adversarial examples expose vulnerabilities of natural language processing (NLP) models. Existing techniques for generating adversarial examples are typically driven by deterministic heuristic rules that are agnostic to the optimal adversarial examples, a strategy that often results in attack failures. To address this, this research proposes Fraud's Bargain Attack (FBA), which utilizes a novel randomization mechanism to enlarge the search space and enables high-quality adversarial examples to be generated with high probability. FBA applies the Metropolis-Hastings algorithm to enhance the selection of adversarial examples from all candidates proposed by a customized Word Manipulation Process (WMP). WMP perturbs one word at a time via insertion, removal, or substitution in a context-aware manner. Extensive experiments demonstrate that FBA outperforms the baselines in terms of attack success rate and imperceptibility.
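The abstract's selection step follows the standard Metropolis-Hastings accept/reject rule: a candidate perturbation is accepted with probability proportional to the ratio of its score to the current state's score. The sketch below is only a toy illustration of that rule, not the paper's method: the candidate set, the fixed `SCORE` table (a stand-in for an attack objective combining misclassification confidence and imperceptibility), and the uniform proposal are all assumptions made for the example.

```python
import random

# Toy stand-ins for candidate adversarial edits proposed by a WMP-like process.
CANDIDATES = ["orig", "swap", "insert", "delete"]
# Hypothetical unnormalized target density over candidates; in a real attack
# this would score how well each perturbed text fools the classifier while
# staying close to the original input.
SCORE = {"orig": 0.1, "swap": 0.5, "insert": 0.3, "delete": 0.1}

def mh_step(current: str) -> str:
    """One Metropolis-Hastings step with a symmetric (uniform) proposal."""
    candidate = random.choice(CANDIDATES)
    # Acceptance probability: min(1, pi(candidate) / pi(current)).
    alpha = min(1.0, SCORE[candidate] / SCORE[current])
    return candidate if random.random() < alpha else current

def sample(n: int = 20000, seed: int = 0) -> dict:
    """Run the chain and count visits; frequencies approximate SCORE."""
    random.seed(seed)
    state = "orig"
    counts = {c: 0 for c in CANDIDATES}
    for _ in range(n):
        state = mh_step(state)
        counts[state] += 1
    return counts

print(sample())
```

Because the proposal is symmetric, the chain's visit frequencies converge to the normalized scores, so the highest-scoring candidate is sampled most often, which is the property the attack exploits to favor high-quality adversarial examples.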

Published

2023-09-06

How to Cite

Ni, M., Sun, Z., & Liu, W. (2023). Fraud’s Bargain Attacks to Textual Classifiers via Metropolis-Hasting Sampling (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 16290-16291. https://doi.org/10.1609/aaai.v37i13.27005