Track the Answer: Extending TextVQA from Image to Video with Spatio-Temporal Clues

Authors

  • Yan Zhang — Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China
  • Gangyan Zeng — School of Cyber Science and Engineering, Nanjing University of Science and Technology, China
  • Huawen Shen — Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China
  • Daiqing Wu — Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China
  • Yu Zhou — VCIP & TMCC & DISSec, College of Computer Science, Nankai University, China
  • Can Ma — Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China

DOI:

https://doi.org/10.1609/aaai.v39i10.33115

Abstract

Video text-based visual question answering (Video TextVQA) is a practical task that aims to answer questions by jointly reasoning over textual and visual information in a given video. Inspired by the development of TextVQA in the image domain, existing Video TextVQA approaches leverage a language model (e.g., T5) to process multiple text-rich frames and generate answers auto-regressively. Nevertheless, the spatio-temporal relationships among visual entities (including scene text and objects) are disrupted, and the models are susceptible to interference from irrelevant information, resulting in irrational reasoning and inaccurate answers. To tackle these challenges, we propose TEA (short for "Track the Answer"), a method that better extends the generative TextVQA framework from image to video. TEA recovers the spatio-temporal relationships in a complementary way and incorporates OCR-aware clues to enhance the quality of question reasoning. Extensive experiments on several public Video TextVQA datasets validate the effectiveness and generalization of our framework. TEA outperforms existing TextVQA methods, video-language pretraining methods, and video large language models by large margins. The code will be publicly released.

Published

2025-04-11

How to Cite

Zhang, Y., Zeng, G., Shen, H., Wu, D., Zhou, Y., & Ma, C. (2025). Track the Answer: Extending TextVQA from Image to Video with Spatio-Temporal Clues. Proceedings of the AAAI Conference on Artificial Intelligence, 39(10), 10275-10283. https://doi.org/10.1609/aaai.v39i10.33115

Section

AAAI Technical Track on Computer Vision IX