InstanceFormer: An Online Video Instance Segmentation Framework

Authors

  • Rajat Koner, Ludwig Maximilian University of Munich, MCML
  • Tanveer Hannan, Ludwig Maximilian University of Munich, MCML
  • Suprosanna Shit, Technical University of Munich
  • Sahand Sharifzadeh, Ludwig Maximilian University of Munich
  • Matthias Schubert, Ludwig Maximilian University of Munich, MCML
  • Thomas Seidl, Ludwig Maximilian University of Munich, MCML
  • Volker Tresp, Ludwig Maximilian University of Munich, MCML

DOI:

https://doi.org/10.1609/aaai.v37i1.25201

Keywords:

CV: Segmentation, CV: Motion & Tracking, CV: Video Understanding & Activity Analysis

Abstract

Recent transformer-based offline video instance segmentation (VIS) approaches achieve encouraging results and significantly outperform online approaches. However, their reliance on the whole video and the immense computational complexity caused by full spatio-temporal attention limit them in real-life applications such as processing lengthy videos. In this paper, we propose an efficient, single-stage, transformer-based online VIS framework named InstanceFormer, which is especially suitable for long and challenging videos. We propose three novel components to model short-term and long-term dependency and temporal coherence. First, we propagate the representation, location, and semantic information of prior instances to model short-term changes. Second, we propose a novel memory cross-attention in the decoder, which allows the network to look into earlier instances within a certain temporal window. Finally, we employ a temporal contrastive loss to impose coherence in the representation of an instance across all frames. Memory attention and temporal coherence are particularly beneficial to long-range dependency modeling, including challenging scenarios like occlusion. The proposed InstanceFormer outperforms previous online benchmark methods by a large margin across multiple datasets. Most importantly, InstanceFormer surpasses offline approaches on challenging and long datasets such as YouTube-VIS-2021 and OVIS. Code is available at https://github.com/rajatkoner08/InstanceFormer.
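The abstract describes two components that lend themselves to a brief illustration: a memory cross-attention in the decoder that lets current-frame instance queries attend to instances from a recent temporal window, and a temporal contrastive loss that keeps an instance's representation coherent across frames. The PyTorch sketch below is only a minimal illustration of these two ideas; the module and function names, tensor shapes, rolling-memory bookkeeping, and the InfoNCE-style loss formulation are assumptions made here for clarity and do not reproduce the authors' implementation (see the linked repository for that).

```python
# Minimal sketch (assumed shapes and names), not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MemoryCrossAttention(nn.Module):
    """Decoder-side cross-attention over a rolling memory of instance
    embeddings from the previous `window` frames (illustrative only)."""

    def __init__(self, embed_dim=256, num_heads=8, window=5):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.memory = []  # one (batch, num_instances, embed_dim) tensor per past frame

    def forward(self, queries):
        # queries: (batch, num_instances, embed_dim) for the current frame
        if self.memory:
            mem = torch.cat(self.memory, dim=1)         # (batch, window * num_instances, embed_dim)
            attended, _ = self.attn(queries, mem, mem)  # cross-attend to earlier instances
            queries = queries + attended                # residual update of current queries
        # push current-frame embeddings into the temporal window
        self.memory.append(queries.detach())
        if len(self.memory) > self.window:
            self.memory.pop(0)
        return queries


def temporal_contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss: pull embeddings of the same instance in different
    frames together, push different instances apart (assumed formulation)."""
    anchor = F.normalize(anchor, dim=-1)       # (num_instances, embed_dim), current frame
    positive = F.normalize(positive, dim=-1)   # same instances in another frame
    negatives = F.normalize(negatives, dim=-1) # (num_instances, num_neg, embed_dim)
    pos_sim = (anchor * positive).sum(-1, keepdim=True)      # (num_instances, 1)
    neg_sim = torch.einsum('nd,nkd->nk', anchor, negatives)  # (num_instances, num_neg)
    logits = torch.cat([pos_sim, neg_sim], dim=1) / temperature
    targets = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, targets)  # positive sits at index 0
```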

Published

2023-06-26

How to Cite

Koner, R., Hannan, T., Shit, S., Sharifzadeh, S., Schubert, M., Seidl, T., & Tresp, V. (2023). InstanceFormer: An Online Video Instance Segmentation Framework. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1), 1188-1195. https://doi.org/10.1609/aaai.v37i1.25201

Section

AAAI Technical Track on Computer Vision I