HLPD: Aligning LLMs to Human Language Preference for Machine-Revised Text Detection
DOI: https://doi.org/10.1609/aaai.v40i36.40297
Abstract
To prevent misinformation and social issues arising from trustworthy-looking content generated by LLMs, it is crucial to develop efficient and reliable methods for identifying the source of texts. Previous approaches have demonstrated exceptional performance in detecting texts fully generated by LLMs. However, these methods struggle when confronted with more advanced LLM output or text subjected to adversarial multi-task machine revision, especially in the black-box setting, where the generating model is unknown. To address this challenge, grounded in the hypothesis that human writing possesses consistent, distinctive stylistic patterns, we propose Human Language Preference Detection (HLPD). HLPD employs a reward-based alignment process, Human Language Preference Optimization (HLPO), to shift the scoring model's token distribution toward human-like writing, making the model more sensitive to human writing and thereby enhancing the identification of machine-revised text. We test HLPD in an adversarial multi-task evaluation framework that leverages a five-dimensional prompt generator and multiple advanced LLMs to create diverse revision scenarios. When detecting texts revised by GPT-series models, HLPD achieves a 15.11% relative improvement in AUROC over ImBD, surpassing Fast-DetectGPT by 45.56%. When evaluated on texts generated by advanced LLMs, HLPD achieves the highest average AUROC, exceeding ImBD by 5.53% and Fast-DetectGPT by 34.14%.
Published
2026-03-14
How to Cite
Dai, F., Jiang, X., & Deng, Z. (2026). HLPD: Aligning LLMs to Human Language Preference for Machine-Revised Text Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 40(36), 30440-30448. https://doi.org/10.1609/aaai.v40i36.40297
Issue
Section
AAAI Technical Track on Natural Language Processing I