SpecDetect: Simple, Fast, and Training-Free Detection of LLM-Generated Text via Spectral Analysis
DOI:
https://doi.org/10.1609/aaai.v40i38.40510
Abstract
The proliferation of high-quality text from Large Language Models (LLMs) demands reliable and efficient detection methods. While existing training-free approaches show promise, they often rely on surface-level statistics and overlook fundamental signal properties of the text generation process. In this work, we reframe detection as a signal processing problem, introducing a novel paradigm that analyzes the sequence of token log-probabilities in the frequency domain. By systematically analyzing the signal's spectral properties using the global Discrete Fourier Transform (DFT) and the local Short-Time Fourier Transform (STFT), we find that human-written text consistently exhibits significantly higher spectral energy. This higher energy reflects the larger-amplitude fluctuations inherent in human writing compared to the suppressed dynamics of LLM-generated text. Based on this key insight, we construct SpecDetect, a detector built on a single, robust feature from the global DFT: DFT total energy. We also propose an enhanced version, SpecDetect++, which incorporates a sampling discrepancy mechanism to further boost robustness. Extensive experiments show that our approach outperforms the state-of-the-art model while running in nearly half the time. Our work introduces a new, efficient, and interpretable pathway for LLM-generated text detection, showing that classical signal processing techniques offer a surprisingly powerful solution to this modern challenge.
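The core feature described above, total spectral energy of the token log-probability sequence, can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the exact preprocessing (e.g., whether the sequence is mean-centered or normalized before the DFT, and how a decision threshold is chosen) is an assumption here.

```python
import numpy as np

def dft_total_energy(log_probs):
    """Total spectral energy of a token log-probability sequence.

    Sketch of the DFT-total-energy feature; mean-centering before the
    transform is an assumed preprocessing step, not taken from the paper.
    """
    x = np.asarray(log_probs, dtype=float)
    x = x - x.mean()               # remove the DC component (assumption)
    spectrum = np.fft.rfft(x)      # global DFT of the real-valued signal
    return float(np.sum(np.abs(spectrum) ** 2))

# Intuition from the abstract: human text has larger-amplitude
# log-probability fluctuations, hence higher spectral energy, than the
# smoother sequences typical of LLM-generated text.
bursty = [0.0, -5.0, -1.0, -6.0, -0.5, -4.5]   # large fluctuations
smooth = [-2.0, -2.1, -2.0, -2.2, -2.1, -2.0]  # suppressed dynamics
assert dft_total_energy(bursty) > dft_total_energy(smooth)
```

In a real detector, `log_probs` would come from scoring the candidate text with a language model, and texts whose energy exceeds a calibrated threshold would be classified as human-written.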
Published
2026-03-14
How to Cite
Luo, H., Zhang, W., Wang, S., Zou, W., Lin, C., Meng, X., & Zhang, Y. (2026). SpecDetect: Simple, Fast, and Training-Free Detection of LLM-Generated Text via Spectral Analysis. Proceedings of the AAAI Conference on Artificial Intelligence, 40(38), 32356–32364. https://doi.org/10.1609/aaai.v40i38.40510
Issue
Section
AAAI Technical Track on Natural Language Processing III