EvaLDA: Efficient Evasion Attacks Towards Latent Dirichlet Allocation

Authors

  • Qi Zhou, Hangzhou Dianzi University
  • Haipeng Chen, Harvard University
  • Yitao Zheng, Hangzhou Dianzi University
  • Zhen Wang, Hangzhou Dianzi University

DOI

https://doi.org/10.1609/aaai.v35i16.17716

Keywords

Adversarial Attacks & Robustness

Abstract

As one of the most powerful topic models, Latent Dirichlet Allocation (LDA) has been used in a vast range of tasks, including document understanding, information retrieval, and peer-reviewer assignment. Despite its tremendous popularity, the security of LDA has rarely been studied. This poses severe risks to security-critical, LDA-based tasks such as sentiment analysis and peer-reviewer assignment. In this paper, we ask whether LDA models are vulnerable to adversarial perturbations of benign document examples at inference time. We formalize the evasion attack against LDA models as an optimization problem and prove that it is NP-hard. We then propose a novel and efficient algorithm, EvaLDA, to solve it. We demonstrate the effectiveness of EvaLDA via extensive empirical evaluations. For instance, on the NIPS dataset, EvaLDA can, on average, promote the rank of a target topic from 10 to around 7 by replacing only 1% of the words in a victim document with similar words. Our work provides significant insights into the power and limitations of evasion attacks against LDA models.
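
The abstract describes the attack at a high level: small, semantically constrained word replacements in a victim document that promote a target topic's rank under LDA inference. As a concrete illustration, here is a minimal Python sketch (using gensim) of that attack objective. It is not the authors' EvaLDA algorithm, only a naive greedy baseline; the toy corpus, hand-made "similar word" map, target topic id, and replacement budget are all illustrative assumptions.

    # Naive greedy word-replacement attack on LDA inference (illustrative only).
    from gensim.corpora import Dictionary
    from gensim.models import LdaModel

    # Tiny toy corpus; the paper evaluates on real corpora such as NIPS.
    docs = [
        "neural network training deep learning model".split(),
        "topic model inference document corpus words".split(),
        "reviewer paper assignment conference submission".split(),
        "sentiment analysis text classification opinion".split(),
    ] * 5
    dictionary = Dictionary(docs)
    corpus = [dictionary.doc2bow(d) for d in docs]
    lda = LdaModel(corpus, num_topics=4, id2word=dictionary,
                   random_state=0, passes=10)

    def topic_rank(doc_words, target):
        """Rank (1 = most probable) of the target topic for a document."""
        bow = dictionary.doc2bow(doc_words)
        dist = lda.get_document_topics(bow, minimum_probability=0.0)
        ordered = sorted(dist, key=lambda tp: -tp[1])
        return 1 + [t for t, _ in ordered].index(target)

    # Hypothetical "similar word" candidates; the paper constrains
    # replacements to semantically similar words.
    similar = {"training": ["inference"], "model": ["corpus"], "deep": ["topic"]}

    def greedy_attack(doc_words, target, budget=1):
        """Apply up to `budget` swaps, each chosen to most improve the rank."""
        doc = list(doc_words)
        for _ in range(budget):
            best = None  # (rank after swap, position, replacement word)
            for i, w in enumerate(doc):
                for cand in similar.get(w, []):
                    trial = doc[:i] + [cand] + doc[i + 1:]
                    rank = topic_rank(trial, target)
                    if best is None or rank < best[0]:
                        best = (rank, i, cand)
            if best is None or best[0] >= topic_rank(doc, target):
                break  # no single replacement improves the rank
            doc[best[1]] = best[2]
        return doc

    victim = "neural network training deep learning model".split()
    target = 1  # hypothetical target topic id
    print("rank before:", topic_rank(victim, target))
    adversarial = greedy_attack(victim, target, budget=2)
    print("rank after: ", topic_rank(adversarial, target))

Note the contrast with the paper: this brute-force greedy loop re-runs inference for every candidate swap, whereas the abstract presents the underlying optimization problem as NP-hard and EvaLDA as an efficient algorithm for solving it.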

Published

2021-05-18

How to Cite

Zhou, Q., Chen, H., Zheng, Y., & Wang, Z. (2021). EvaLDA: Efficient Evasion Attacks Towards Latent Dirichlet Allocation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(16), 14602-14611. https://doi.org/10.1609/aaai.v35i16.17716

Issue

Vol. 35 No. 16 (2021)

Section

AAAI Technical Track on Speech and Natural Language Processing III