LoopLLM: Transferable Energy-Latency Attacks in LLMs via Repetitive Generation

Authors

  • Xingyu Li — National Interdisciplinary Research Center of Engineering Physics, Institute of Computer Application, China Academy of Engineering Physics
  • Xiaolei Liu — National Interdisciplinary Research Center of Engineering Physics, Institute of Computer Application, China Academy of Engineering Physics
  • Cheng Liu — National Interdisciplinary Research Center of Engineering Physics, Institute of Computer Application, China Academy of Engineering Physics
  • Yixiao Xu — Beijing University of Posts and Telecommunications
  • Kangyi Ding — National Interdisciplinary Research Center of Engineering Physics, Institute of Computer Application, China Academy of Engineering Physics
  • Bangzhou Xin — National Interdisciplinary Research Center of Engineering Physics, Institute of Computer Application, China Academy of Engineering Physics
  • Jia-Li Yin — Fuzhou University

DOI:

https://doi.org/10.1609/aaai.v40i38.40445

Abstract

As large language models (LLMs) scale, their inference consumes substantial computational resources, exposing them to energy-latency attacks, in which crafted prompts induce high energy and latency costs. Existing attack methods aim to prolong output by delaying the generation of termination symbols. However, as the output grows longer, controlling the termination symbols through the input becomes difficult, making these methods less effective. We therefore propose LoopLLM, an energy-latency attack framework based on the observation that repetitive generation can trigger low-entropy decoding loops, reliably compelling LLMs to generate until they reach their output limits. LoopLLM introduces (1) a repetition-inducing prompt optimization that exploits autoregressive vulnerabilities to induce repetitive generation, and (2) a token-aligned ensemble optimization that aggregates gradients to improve cross-model transferability. Extensive experiments on 12 open-source and 2 commercial LLMs show that LoopLLM significantly outperforms existing methods, achieving over 90% of the maximum output length, compared with 20% for baselines, and improving transferability by around 40% against DeepSeek-V3 and Gemini 2.5 Flash.

Published

2026-03-14

How to Cite

Li, X., Liu, X., Liu, C., Xu, Y., Ding, K., Xin, B., & Yin, J.-L. (2026). LoopLLM: Transferable Energy-Latency Attacks in LLMs via Repetitive Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 40(38), 31770–31777. https://doi.org/10.1609/aaai.v40i38.40445

Section

AAAI Technical Track on Natural Language Processing III