CoSPED: Consistent Soft Prompt Targeted Data Extraction and Defense

Authors

  • Zhuochen Yang, Cybersecurity Strategic Technology Centre, ST Engineering, Singapore
  • Kar Wai Fok, Cybersecurity Strategic Technology Centre, ST Engineering, Singapore
  • Vrizlynn L. L. Thing, Cybersecurity Strategic Technology Centre, ST Engineering, Singapore

DOI:

https://doi.org/10.1609/aaai.v40i44.41145

Abstract

Large language models (LLMs) have gained widespread attention recently, but their potential security vulnerabilities, especially privacy leakage, are also becoming apparent. To test and evaluate data extraction risks in LLMs, we propose CoSPED, short for Consistent Soft Prompt Targeted Data Extraction and Defense. We introduce several novel components, including Dynamic Loss, Additive Loss, Common Loss, and a Self-Consistency Decoding Strategy, and test them to enhance the consistency of the soft prompt tuning process. Through extensive experimentation with various combinations, we achieve an extraction rate of 65.2% at a 50-token prefix comparison. Comparisons of CoSPED with other reference works confirm its superior extraction rate. We evaluate CoSPED in additional scenarios, achieving an extraction rate of 51.7% on the Pythia model and introducing a cross-model comparison. Finally, we explore defense through Rank-One Model Editing and reduce the extraction rate to 1.6%, demonstrating that our analysis of extraction mechanisms can directly inform effective mitigation strategies against soft prompt-based attacks.
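The self-consistency idea in the abstract can be illustrated with a minimal sketch (not the authors' implementation): sample several continuations for the same prefix and keep the one generated most often, on the assumption that memorized training data is reproduced consistently across samples while non-memorized continuations vary. The generator, prompt, and threshold here are all hypothetical placeholders.

```python
import random
from collections import Counter

def self_consistent_decode(generate, prompt, n_samples=20, seed=0):
    """Sample n continuations and return the most frequent one.

    `generate` is any stochastic text generator: (prompt, rng) -> continuation.
    Memorized (training-set) text tends to recur across samples, so
    majority voting surfaces it; the fraction of agreeing samples acts
    as a consistency score.
    """
    rng = random.Random(seed)
    samples = [generate(prompt, rng) for _ in range(n_samples)]
    best, count = Counter(samples).most_common(1)[0]
    return best, count / n_samples

# Toy generator standing in for an LLM: it usually emits a "memorized"
# suffix, and occasionally a random one (all values are illustrative).
def toy_generate(prompt, rng):
    if rng.random() < 0.7:
        return prompt + " 555-0173"              # memorized secret
    return prompt + f" {rng.randint(0, 9999):04d}"  # spurious continuation

best, score = self_consistent_decode(toy_generate, "Call me at")
```

With a real model, `generate` would be a sampled decoding call, and the consistency score can be used to rank candidate extractions.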

Published

2026-03-14

How to Cite

Yang, Z., Fok, K. W., & Thing, V. L. L. (2026). CoSPED: Consistent Soft Prompt Targeted Data Extraction and Defense. Proceedings of the AAAI Conference on Artificial Intelligence, 40(44), 38075-38083. https://doi.org/10.1609/aaai.v40i44.41145

Section

AAAI Special Track on AI Alignment