CharBench: Evaluating the Role of Tokenization in Character-Level Tasks

Authors

  • Omri Uzan, Stanford University
  • Yuval Pinter, Faculty of Computer and Information Science, Ben-Gurion University of the Negev

DOI:

https://doi.org/10.1609/aaai.v40i39.40615

Abstract

Tasks that require character-level reasoning, such as counting or locating characters within words, remain challenging for contemporary language models. A common conjecture is that language models' reliance on subword units, rather than characters, contributes to their struggles with character-level tasks, yet recent studies offer conflicting conclusions about the role of tokenization, leaving its impact unclear. To address this gap, we introduce CharBench, a comprehensive benchmark of character-level tasks that is two orders of magnitude larger than existing alternatives. We evaluate a diverse range of leading open-weight and proprietary models on CharBench and find that it presents a significant challenge to modern LLMs, with average accuracies of 43.6% and 32.3% on some tasks. We present an in-depth analysis of how intrinsic properties of words and their segmentations into tokens correspond to model performance. For counting tasks, we find that tokenization properties are weakly correlated with correctness, while the length of the queried word and the actual character count play a more significant part. In contrast, for tasks requiring intra-word positional understanding, performance is negatively correlated with the length of the token containing the queried character, suggesting that longer tokens obscure information on character position for LLMs. We encourage future work to build on the benchmark and evaluation methodology introduced here as tools for improving model performance on these tasks.
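The abstract's positional finding can be illustrated with a toy sketch (not the paper's code, and not a real tokenizer): a greedy longest-match segmentation over a small made-up vocabulary, plus a helper that locates which token covers a queried character. The vocabulary, word, and function names here are all hypothetical, chosen only to show how a longer covering token hides a character's in-word position.

```python
# Illustrative sketch: which subword token "contains" a queried character,
# and that token's length -- the property the abstract links to errors on
# intra-word positional tasks. Toy vocabulary; not the paper's tokenizer.

def segment(word, vocab):
    """Greedy longest-match segmentation over a toy subword vocabulary."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in vocab or j == i + 1:  # single chars always allowed
                tokens.append(piece)
                i = j
                break
    return tokens

def token_containing(tokens, char_index):
    """Return the token covering the character at char_index in the word."""
    pos = 0
    for tok in tokens:
        if pos <= char_index < pos + len(tok):
            return tok
        pos += len(tok)
    raise IndexError(char_index)

vocab = {"straw", "berry"}
tokens = segment("strawberry", vocab)   # ['straw', 'berry']
# The final 'r' of "strawberry" (index 7) falls inside the 5-character
# token 'berry', so its position is not directly visible to the model:
tok = token_containing(tokens, 7)
print(tokens, tok, len(tok))            # ['straw', 'berry'] berry 5
```

Under this sketch, a character buried in a long token like `berry` has no token boundary marking its position, which is one way to read the abstract's negative correlation between covering-token length and positional accuracy.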

Published

2026-03-14

How to Cite

Uzan, O., & Pinter, Y. (2026). CharBench: Evaluating the Role of Tokenization in Character-Level Tasks. Proceedings of the AAAI Conference on Artificial Intelligence, 40(39), 33296-33304. https://doi.org/10.1609/aaai.v40i39.40615

Section

AAAI Technical Track on Natural Language Processing IV