Simple is not Easy: A Simple Strong Baseline for TextVQA and TextCaps


  • Qi Zhu Northwestern Polytechnical University
  • Chenyu Gao Northwestern Polytechnical University
  • Peng Wang Northwestern Polytechnical University
  • Qi Wu University of Adelaide


Language and Vision


Text appearing in daily scenes that can be recognized by OCR (Optical Character Recognition) tools carries significant information, such as street names, product brands, and prices. Two tasks -- text-based visual question answering and text-based image captioning, which extend existing vision-language applications with text -- are catching on rapidly. To address these problems, many sophisticated multi-modality encoding frameworks (such as heterogeneous graph structures) have been used. In this paper, we argue that a simple attention mechanism can do the same or an even better job without any bells and whistles. Under this mechanism, we simply split OCR token features into separate visual- and linguistic-attention branches and send them to a popular Transformer decoder to generate answers or captions. Surprisingly, we find this simple baseline model is rather strong: it consistently outperforms state-of-the-art (SOTA) models on two popular benchmarks, TextVQA and all three tasks of ST-VQA, even though these SOTA models use far more complex encoding mechanisms. Transferring it to text-based image captioning, we also surpass the TextCaps Challenge 2020 winner. We hope this work sets a new baseline for these two OCR-text-related applications and inspires new thinking in multi-modality encoder design. Code is available at
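The split-attention idea described above can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the function name `attend`, the toy dimensions, and the concatenation-based fusion are all assumptions for the sake of a self-contained example. Each OCR token contributes a visual feature and a linguistic feature; a decoder state attends over each set separately, and the two branch contexts are fused before prediction.

```python
import numpy as np

def attend(query, keys, values):
    """Scaled dot-product attention: one query vector over a set of key/value pairs."""
    d = keys.shape[-1]
    scores = query @ keys.T / np.sqrt(d)      # one score per OCR token
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax over tokens
    return weights @ values                   # weighted sum of token values

rng = np.random.default_rng(0)
num_ocr, d_model = 5, 16  # toy sizes, chosen for illustration only

# Each OCR token carries a visual feature (appearance/layout) and a
# linguistic feature (an embedding of the recognized word string).
ocr_visual = rng.standard_normal((num_ocr, d_model))
ocr_linguistic = rng.standard_normal((num_ocr, d_model))

# A decoder state (e.g. from one Transformer decoding step) queries
# the two branches independently.
decoder_state = rng.standard_normal(d_model)
vis_context = attend(decoder_state, ocr_visual, ocr_visual)
lang_context = attend(decoder_state, ocr_linguistic, ocr_linguistic)

# Fuse the two branch outputs (here: simple concatenation).
context = np.concatenate([vis_context, lang_context])
print(context.shape)
```

The point of the sketch is only the separation: the visual branch never sees word embeddings and the linguistic branch never sees appearance features, so each attention distribution is computed from one modality alone.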




How to Cite

Zhu, Q., Gao, C., Wang, P., & Wu, Q. (2021). Simple is not Easy: A Simple Strong Baseline for TextVQA and TextCaps. Proceedings of the AAAI Conference on Artificial Intelligence, 35(4), 3608-3615. Retrieved from



AAAI Technical Track on Computer Vision III