Fraud’s Bargain Attacks to Textual Classifiers via Metropolis-Hasting Sampling (Student Abstract)


  • Mingze Ni University of Technology Sydney
  • Zhensu Sun ShanghaiTech University
  • Wei Liu University of Technology Sydney



Keywords: Adversarial Learning, Textual Attack, Natural Language Processing.


Recent studies on adversarial examples expose vulnerabilities of natural language processing (NLP) models. Existing techniques for generating adversarial examples are typically driven by deterministic heuristic rules that are agnostic to the optimal adversarial examples, a strategy that often results in attack failures. To address this, this research proposes Fraud's Bargain Attack (FBA), which utilizes a novel randomization mechanism to enlarge the search space and enables high-quality adversarial examples to be generated with high probability. FBA applies the Metropolis-Hastings algorithm to enhance the selection of adversarial examples from all candidates proposed by a customized Word Manipulation Process (WMP). WMP perturbs one word at a time via insertion, removal, or substitution in a context-aware manner. Extensive experiments demonstrate that FBA outperforms the baselines in terms of attack success rate and imperceptibility.
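To illustrate the sampling idea behind FBA, here is a minimal, self-contained Metropolis-Hastings sketch. It is not the paper's implementation: the `score` function stands in for the attack objective (which in FBA combines misclassification and imperceptibility), and the `propose` step stands in for WMP's insert/remove/substitute moves; both are hypothetical toy versions on integers rather than text.

```python
import math
import random

random.seed(0)

def score(x):
    # Toy stand-in for an attack objective: higher is better, peaked at x = 3.
    return -abs(x - 3)

def propose(x):
    # Symmetric toy proposal (stand-in for WMP's word-level edits).
    return x + random.choice([-1, 1])

def metropolis_hastings(x0, steps=5000, temperature=1.0):
    """Run an MH chain; with a symmetric proposal the acceptance
    probability reduces to the Metropolis rule min(1, exp(Δscore/T))."""
    x = x0
    samples = []
    for _ in range(steps):
        cand = propose(x)
        alpha = min(1.0, math.exp((score(cand) - score(x)) / temperature))
        if random.random() < alpha:
            x = cand  # accept the candidate
        samples.append(x)  # rejected proposals repeat the current state
    return samples

samples = metropolis_hastings(0)
```

Because better-scoring candidates are accepted with higher probability but worse ones are still accepted occasionally, the chain explores a larger space than greedy word-replacement heuristics while concentrating its samples near high-scoring states, which is the intuition the abstract appeals to.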




How to Cite

Ni, M., Sun, Z., & Liu, W. (2023). Fraud’s Bargain Attacks to Textual Classifiers via Metropolis-Hasting Sampling (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 37(13), 16290-16291.