GlitchMiner: Mining Glitch Tokens in Large Language Models via Gradient-based Discrete Optimization

Authors

  • Zihui Wu School of Computer Science and Technology, Xidian University
  • Haichang Gao School of Computer Science and Technology, Xidian University
  • Ping Wang School of Computer Science and Technology, Xidian University
  • Shudong Zhang School of Computer Science and Technology, Xidian University
  • Zhaoxiang Liu Data Science & Artificial Intelligence Research Institute, China Unicom; Unicom Data Intelligence, China Unicom
  • Shiguo Lian Data Science & Artificial Intelligence Research Institute, China Unicom; Unicom Data Intelligence, China Unicom

DOI:

https://doi.org/10.1609/aaai.v40i40.40693

Abstract

Glitch tokens—inputs that trigger unpredictable or anomalous behavior in Large Language Models (LLMs)—pose significant challenges to model reliability and safety. Existing detection methods primarily rely on heuristic embedding patterns or statistical anomalies within internal representations, limiting their generalizability across different model architectures and potentially missing anomalies that deviate from observed patterns. We introduce GlitchMiner, a behavior-driven framework designed to identify glitch tokens by maximizing predictive entropy. Leveraging a gradient-guided local search strategy, GlitchMiner efficiently explores the discrete token space without relying on model-specific heuristics or large-batch sampling. Extensive experiments across ten LLMs from five major model families demonstrate that GlitchMiner consistently outperforms existing approaches in detection accuracy and query efficiency, providing a generalizable and scalable solution for effective glitch token discovery.
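The abstract's core idea—searching the discrete token space for tokens that maximize the model's predictive entropy—can be illustrated with a toy sketch. This is not the authors' implementation: the model, vocabulary, and candidate-selection step below are all stand-ins (a real system would query an LLM's logits and use embedding gradients to rank neighborhood candidates, as the paper's gradient-guided local search does).

```python
# Toy sketch (NOT the paper's code): glitch-token mining framed as
# predictive-entropy maximization with a greedy local search.
# `model_logits_fn` stands in for an LLM's next-token logits when probed
# with a given token; candidate ranking here is random sampling, whereas
# GlitchMiner uses gradients to pick promising neighbors.
import math
import random

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(probs):
    """Shannon entropy of a probability distribution (nats)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def predictive_entropy(model_logits_fn, token_id):
    """Entropy of the model's next-token distribution for a probe token."""
    return entropy(softmax(model_logits_fn(token_id)))

def mine_glitch_tokens(model_logits_fn, vocab_size, steps=20, k=8, seed=0):
    """Greedy local search: score a candidate batch each step and keep the
    token with the highest predictive entropy seen so far."""
    rng = random.Random(seed)
    best = (predictive_entropy(model_logits_fn, 0), 0)
    for _ in range(steps):
        candidates = rng.sample(range(vocab_size), k)
        scored = max((predictive_entropy(model_logits_fn, t), t)
                     for t in candidates)
        if scored > best:
            best = scored
    return best  # (entropy, token_id) of the most anomalous token found
```

A quick sanity check: with a toy model whose distribution is sharply peaked for every token except one "glitch" token (which yields a near-uniform, high-entropy distribution), the search recovers that token once it appears in a candidate batch.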

Published

2026-03-14

How to Cite

Wu, Z., Gao, H., Wang, P., Zhang, S., Liu, Z., & Lian, S. (2026). GlitchMiner: Mining Glitch Tokens in Large Language Models via Gradient-based Discrete Optimization. Proceedings of the AAAI Conference on Artificial Intelligence, 40(40), 33998-34005. https://doi.org/10.1609/aaai.v40i40.40693

Section

AAAI Technical Track on Natural Language Processing V