O3SLM: Open Weight, Open Data, and Open Vocabulary Sketch-Language Model

Authors

  • Rishi Gupta, Indian Institute of Science
  • Mukilan Karuppasamy, Indian Institute of Science
  • Shyam Marjit, Indian Institute of Science
  • Aditay Tripathi, Indian Institute of Science
  • Anirban Chakraborty, Indian Institute of Science

DOI:

https://doi.org/10.1609/aaai.v40i6.42450

Abstract

While Large Vision Language Models (LVLMs) are increasingly deployed in real-world applications, their ability to interpret abstract visual inputs remains limited. Specifically, they struggle to comprehend hand-drawn sketches, a modality that offers an intuitive means of expressing concepts that are difficult to describe textually. We identify the primary bottleneck as the absence of a large-scale dataset that jointly models sketches, photorealistic images, and corresponding natural language instructions. To address this, we present two key contributions: (1) a new, large-scale dataset of image-sketch-instruction triplets designed to facilitate both pretraining and instruction tuning, and (2) O3SLM, an LVLM trained on this dataset. Comprehensive evaluations on multiple sketch-based tasks, namely (a) object localization, (b) counting, (c) image retrieval (both SBIR and fine-grained SBIR), and (d) visual question answering (VQA), conducted on three existing sketch datasets (QuickDraw!, Sketchy, and TU-Berlin) along with our generated SketchVCL dataset, show that O3SLM achieves state-of-the-art performance, substantially outperforming existing LVLMs in sketch comprehension and reasoning.

Published

2026-03-14

How to Cite

Gupta, R., Karuppasamy, M., Marjit, S., Tripathi, A., & Chakraborty, A. (2026). O3SLM: Open Weight, Open Data, and Open Vocabulary Sketch-Language Model. Proceedings of the AAAI Conference on Artificial Intelligence, 40(6), 4511–4519. https://doi.org/10.1609/aaai.v40i6.42450

Section

AAAI Technical Track on Computer Vision III