OpenVIS: Open-vocabulary Video Instance Segmentation

Authors

  • Pinxue Guo (Academy for Engineering and Technology, Fudan University; Amazon Web Services)
  • Hao Huang (Amazon Web Services)
  • Peiyang He (Amazon Web Services)
  • Xuefeng Liu (Amazon Web Services)
  • Tianjun Xiao (Amazon Web Services)
  • Wenqiang Zhang (Academy for Engineering and Technology, Fudan University; School of Computer Science, Fudan University)

DOI:

https://doi.org/10.1609/aaai.v39i3.32338

Abstract

Open-vocabulary Video Instance Segmentation (OpenVIS) simultaneously detects, segments, and tracks arbitrary object categories in a video, without being constrained to the categories seen during training. In this work, we propose InstFormer, a carefully designed framework for the OpenVIS task that achieves powerful open-vocabulary capability through lightweight fine-tuning on limited-category data. InstFormer begins with an open-world mask proposal network, which is encouraged by a contrastive instance margin loss to propose class-agnostic masks for all potential instances. Next, we introduce InstCLIP, adapted from pre-trained CLIP with Instance Guidance Attention, which efficiently encodes open-vocabulary instance tokens. These instance tokens not only enable open-vocabulary classification but also offer strong universal tracking capability. Furthermore, to prevent the tracking module from being constrained by training data with limited categories, we propose universal rollout association, which reformulates tracking as predicting the next frame's instance tracking token. Experimental results demonstrate that the proposed InstFormer achieves state-of-the-art performance on a comprehensive OpenVIS evaluation benchmark, while also achieving competitive performance on the fully supervised VIS task.
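The two ideas the abstract centers on, open-vocabulary classification of instance tokens against text embeddings and rollout association that matches a predicted next-frame tracking token to observed tokens, can be sketched conceptually. This is a minimal illustration under stated assumptions, not the authors' implementation: all function names, the greedy matching, and the toy embeddings are hypothetical, and a real system would use CLIP text encoders and learned tracking-token predictors.

```python
# Conceptual sketch of two components described in the abstract.
# Assumptions (not from the paper): cosine similarity as the matching
# score, greedy one-to-one assignment, and toy 3-d embeddings.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def classify_open_vocab(instance_token, text_embeddings, categories):
    """Open-vocabulary classification: match an instance token against
    text embeddings of arbitrary category names (CLIP-style)."""
    sims = [cosine(instance_token, t) for t in text_embeddings]
    return categories[int(np.argmax(sims))]

def rollout_associate(predicted_tokens, next_frame_tokens):
    """Rollout-style association: for each instance, a predicted
    next-frame tracking token is matched to the most similar observed
    token, greedily enforcing one-to-one assignment."""
    assignment, used = {}, set()
    for i, pred in enumerate(predicted_tokens):
        sims = [(-1.0 if j in used else cosine(pred, tok))
                for j, tok in enumerate(next_frame_tokens)]
        j = int(np.argmax(sims))
        assignment[i] = j
        used.add(j)
    return assignment
```

In this toy setting, classification reduces to a nearest-text-embedding lookup, and tracking reduces to a similarity-based matching problem over tokens rather than a classifier trained on fixed categories, which is what lets both remain open-vocabulary.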

Published

2025-04-11

How to Cite

Guo, P., Huang, H., He, P., Liu, X., Xiao, T., & Zhang, W. (2025). OpenVIS: Open-vocabulary Video Instance Segmentation. Proceedings of the AAAI Conference on Artificial Intelligence, 39(3), 3275–3283. https://doi.org/10.1609/aaai.v39i3.32338

Section

AAAI Technical Track on Computer Vision II