OUTFOX: LLM-Generated Essay Detection Through In-Context Learning with Adversarially Generated Examples
DOI:
https://doi.org/10.1609/aaai.v38i19.30120
Keywords:
General
Abstract
Large Language Models (LLMs) have achieved human-level fluency in text generation, making it difficult to distinguish between human-written and LLM-generated texts. This poses a growing risk of LLM misuse and demands detectors that can identify LLM-generated text. However, existing detectors lack robustness against attacks: simply paraphrasing LLM-generated texts degrades their detection accuracy. Furthermore, a malicious user might deliberately evade a detector by exploiting its detection results, a scenario not considered in previous studies. In this paper, we propose OUTFOX, a framework that improves the robustness of LLM-generated-text detectors by allowing both the detector and the attacker to consider each other's output. In this framework, the attacker uses the detector's prediction labels as examples for in-context learning and adversarially generates essays that are harder to detect, while the detector uses the adversarially generated essays as examples for in-context learning to learn to detect essays from a strong attacker. Experiments in the domain of student essays show that the proposed detector improves detection performance on attacker-generated texts by up to +41.3 points F1-score. Furthermore, the proposed detector achieves state-of-the-art detection performance, up to 96.9 points F1-score, beating existing detectors on non-attacked texts. Finally, the proposed attacker drastically degrades detector performance by up to -57.0 points F1-score, massively outperforming the baseline paraphrasing method for evading detection.
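The two in-context learning loops described in the abstract can be sketched in code. Below is a minimal illustration of how the detector and attacker might each consume the other's output as few-shot examples; the `generate` stub, the prompt wording, and the label strings are assumptions for illustration, not the authors' exact prompts or implementation.

```python
# Minimal sketch of the OUTFOX detector/attacker interplay described in the
# abstract. The `generate` stub, prompt wording, and label strings are
# illustrative assumptions, not the paper's exact implementation.

def generate(prompt: str) -> str:
    """Stub for an LLM completion call (any chat/completion API would do)."""
    raise NotImplementedError

def detect(essay: str, adversarial_examples: list[tuple[str, str]]) -> str:
    """Detector: uses adversarially generated essays (with their gold labels)
    as in-context examples, learning to spot a strong attacker's output."""
    shots = "\n\n".join(
        f"Essay: {text}\nLabel: {label}"
        for text, label in adversarial_examples
    )
    prompt = (
        f"{shots}\n\nEssay: {essay}\n"
        f"Label (Human-written or LLM-generated):"
    )
    return generate(prompt).strip()

def attack(problem_statement: str,
           detected_examples: list[tuple[str, str]]) -> str:
    """Attacker: uses the detector's prediction labels on past essays as
    in-context examples to generate an essay that is harder to detect."""
    shots = "\n\n".join(
        f"Essay: {text}\nDetector's prediction: {label}"
        for text, label in detected_examples
    )
    prompt = (
        f"{shots}\n\nWrite an essay for the following statement so that "
        f"the detector would predict it as Human-written.\n"
        f"Statement: {problem_statement}\nEssay:"
    )
    return generate(prompt)
```

In this reading of the framework, the two calls alternate: essays produced by `attack` are fed back to `detect` as labeled in-context examples, which is the mechanism behind the robustness gains reported above.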
Published
2024-03-24
How to Cite
Koike, R., Kaneko, M., & Okazaki, N. (2024). OUTFOX: LLM-Generated Essay Detection Through In-Context Learning with Adversarially Generated Examples. Proceedings of the AAAI Conference on Artificial Intelligence, 38(19), 21258-21266. https://doi.org/10.1609/aaai.v38i19.30120
Issue
Vol. 38 No. 19 (2024)
Section
AAAI Technical Track on Safe, Robust and Responsible AI