Joint-GCG: Unified Gradient-Based Poisoning Attacks on Retrieval-Augmented Generation Systems

Authors

  • Haowei Wang, State Key Laboratory of Complex System Modeling and Simulation Technology, Beijing, China; Institute of Software, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
  • Rupeng Zhang, State Key Laboratory of Complex System Modeling and Simulation Technology, Beijing, China; Institute of Software, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
  • Junjie Wang, State Key Laboratory of Complex System Modeling and Simulation Technology, Beijing, China; Institute of Software, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
  • Mingyang Li, State Key Laboratory of Complex System Modeling and Simulation Technology, Beijing, China; Institute of Software, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
  • Yuekai Huang, State Key Laboratory of Complex System Modeling and Simulation Technology, Beijing, China; Institute of Software, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
  • Dandan Wang, State Key Laboratory of Complex System Modeling and Simulation Technology, Beijing, China; Institute of Software, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
  • Qing Wang, State Key Laboratory of Complex System Modeling and Simulation Technology, Beijing, China; Institute of Software, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China

DOI:

https://doi.org/10.1609/aaai.v40i42.40893

Abstract

Retrieval-Augmented Generation (RAG) systems enhance Large Language Models (LLMs) by retrieving relevant documents from external corpora before generating responses. This approach significantly expands LLM capabilities by leveraging vast, up-to-date external knowledge. However, the same reliance on external knowledge makes RAG systems vulnerable to corpus poisoning attacks, in which injected poisoned documents manipulate generated outputs. Existing poisoning strategies typically treat the retrieval and generation stages as disjoint, which limits their effectiveness. We propose Joint-GCG, the first framework to unify gradient-based attacks across both the retriever and the generator through three innovations: (1) Cross-Vocabulary Projection, which aligns the two models' embedding spaces; (2) Gradient Tokenization Alignment, which synchronizes token-level gradient signals across differing tokenizations; and (3) Adaptive Weighted Fusion, which dynamically balances the retrieval and generation attack objectives. Evaluations across multiple retrievers and generators show that Joint-GCG achieves up to 25% (and on average 5%) higher attack success rates than previous methods. Although optimized under a white-box assumption, the generated poisons show unprecedented transferability to unseen models. By unifying gradient-based attacks across the retrieval and generation stages, Joint-GCG fundamentally reshapes our understanding of vulnerabilities within RAG systems.
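To make the abstract's third component concrete, the sketch below illustrates what adaptively fusing retriever and generator gradients in a GCG-style attack could look like. It is a minimal illustration in PyTorch, not the authors' implementation: every name (fuse_gradients, topk_token_candidates) and the loss-proportional weighting scheme are assumptions, and the sketch presumes both gradients have already been mapped into a shared vocabulary space, the job of the paper's Cross-Vocabulary Projection and Gradient Tokenization Alignment steps.

```python
# Hedged sketch of joint gradient fusion for a GCG-style poisoning attack.
# All names and the weighting scheme are hypothetical, not the paper's formulas.
import torch

def fuse_gradients(grad_retriever: torch.Tensor,
                   grad_generator: torch.Tensor,
                   loss_retriever: float,
                   loss_generator: float) -> torch.Tensor:
    """Adaptively combine per-token gradients from the two attack objectives.

    Both inputs are assumed to be [num_adv_tokens, vocab_size] gradients
    already aligned to a shared vocabulary.
    """
    # Normalize so neither objective dominates purely by gradient scale.
    g_ret = grad_retriever / (grad_retriever.norm() + 1e-8)
    g_gen = grad_generator / (grad_generator.norm() + 1e-8)
    # One simple adaptive scheme (an assumption): weight each objective in
    # proportion to its current loss, so the objective that is further from
    # being satisfied gets more influence on the next token substitution.
    total = loss_retriever + loss_generator + 1e-8
    w_ret, w_gen = loss_retriever / total, loss_generator / total
    return w_ret * g_ret + w_gen * g_gen

def topk_token_candidates(fused_grad: torch.Tensor, k: int = 256) -> torch.Tensor:
    """GCG-style candidate selection: for each adversarial token position,
    keep the k vocabulary entries whose substitution most decreases the
    fused loss (i.e., the most negative gradient entries)."""
    return (-fused_grad).topk(k, dim=-1).indices  # shape [num_adv_tokens, k]

# Toy usage: random gradients for a 20-token adversarial suffix, 32k vocab.
if __name__ == "__main__":
    g_r, g_g = torch.randn(20, 32000), torch.randn(20, 32000)
    fused = fuse_gradients(g_r, g_g, loss_retriever=1.3, loss_generator=0.4)
    print(topk_token_candidates(fused, k=8).shape)  # torch.Size([20, 8])
```

In a full GCG loop, the selected candidates would be evaluated in batch and the substitution with the lowest fused loss kept; the paper's contribution is making that single fused signal meaningful across two models with different tokenizers and embedding spaces.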

Published

2026-03-14

How to Cite

Wang, H., Zhang, R., Wang, J., Li, M., Huang, Y., Wang, D., & Wang, Q. (2026). Joint-GCG: Unified Gradient-Based Poisoning Attacks on Retrieval-Augmented Generation Systems. Proceedings of the AAAI Conference on Artificial Intelligence, 40(42), 35793–35801. https://doi.org/10.1609/aaai.v40i42.40893

Section

AAAI Technical Track on Philosophy and Ethics of AI