TongUI: Internet-Scale Trajectories from Multimodal Web Tutorials for Generalized GUI Agents

Authors

  • Bofei Zhang State Key Laboratory for General Artificial Intelligence, BIGAI
  • Zirui Shang Beijing Key Laboratory of Intelligent Information Technology, School of Computer Science & Technology, Beijing Institute of Technology; State Key Laboratory for General Artificial Intelligence, BIGAI
  • Zhi Gao Beijing Key Laboratory of Intelligent Information Technology, School of Computer Science & Technology, Beijing Institute of Technology; State Key Laboratory for General Artificial Intelligence, BIGAI; School of Intelligence Science and Technology, Peking University
  • Wang Zhang State Key Laboratory for General Artificial Intelligence, BIGAI
  • Rui Xie State Key Laboratory for General Artificial Intelligence, BIGAI; Shanghai Jiaotong University
  • Xiaojian Ma State Key Laboratory for General Artificial Intelligence, BIGAI
  • Tao Yuan State Key Laboratory for General Artificial Intelligence, BIGAI
  • Xinxiao Wu Beijing Key Laboratory of Intelligent Information Technology, School of Computer Science & Technology, Beijing Institute of Technology
  • Song-Chun Zhu State Key Laboratory for General Artificial Intelligence, BIGAI; School of Intelligence Science and Technology, Peking University; Department of Automation, Tsinghua University
  • Qing Li State Key Laboratory for General Artificial Intelligence, BIGAI

DOI:

https://doi.org/10.1609/aaai.v40i15.38229

Abstract

Building Graphical User Interface (GUI) agents is a promising research direction, in which agents simulate human interaction with computers or mobile phones to perform diverse GUI tasks. However, a major challenge in developing generalized GUI agents is the lack of sufficient trajectory data across various operating systems and applications, mainly due to the high cost of manual annotation. In this paper, we propose the TongUI framework, which transforms millions of multimodal web tutorials into GUI trajectories for generalized GUI agents. Concretely, we crawl GUI videos and articles from the Internet and process them into GUI agent trajectory data. Based on this, we construct the GUI-Net-1M dataset, which contains 1 million trajectories across five operating systems and over 280 applications. To the best of our knowledge, this is the largest open-source GUI trajectory dataset. We develop the TongUI agent by fine-tuning Qwen2.5-VL-3B/7B/32B models on GUI-Net-1M. The resulting agents show consistent improvements on commonly used grounding and navigation benchmarks, outperforming baseline agents by 10% on multiple benchmarks, demonstrating the effectiveness of the GUI-Net-1M dataset and underscoring the significance of our TongUI framework.

Published

2026-03-14

How to Cite

Zhang, B., Shang, Z., Gao, Z., Zhang, W., Xie, R., Ma, X., … Li, Q. (2026). TongUI: Internet-Scale Trajectories from Multimodal Web Tutorials for Generalized GUI Agents. Proceedings of the AAAI Conference on Artificial Intelligence, 40(15), 12367–12375. https://doi.org/10.1609/aaai.v40i15.38229

Section

AAAI Technical Track on Computer Vision XII