Hybrid Instance-Aware Temporal Fusion for Online Video Instance Segmentation
Keywords: Computer Vision (CV)
Abstract
Recently, transformer-based image segmentation methods have achieved notable success over previous solutions. In the video domain, however, how to effectively model temporal context with the attention of object instances across frames remains an open problem. In this paper, we propose an online video instance segmentation framework with a novel instance-aware temporal fusion method. We first leverage a representation, i.e., a latent code in the global context (instance code) together with CNN feature maps, to capture instance- and pixel-level features. Based on this representation, we introduce a cropping-free temporal fusion approach to model the temporal consistency between video frames. Specifically, we encode global instance-specific information in the instance code and build inter-frame contextual fusion with hybrid attention between the instance codes and CNN feature maps. Inter-frame consistency between the instance codes is further enforced with order constraints. By leveraging the learned hybrid temporal consistency, we can directly retrieve and maintain instance identities across frames, eliminating the complicated frame-wise instance matching of prior methods. Extensive experiments have been conducted on popular VIS datasets, i.e., YouTube-VIS-19/21. Our model achieves the best performance among all online VIS methods. Notably, our model also eclipses all offline methods when using the ResNet-50 backbone.
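To make the fusion idea concrete, the following is a minimal NumPy sketch of one ingredient the abstract describes: instance codes from the previous frame cross-attending to the current frame's flattened CNN feature map, so that identities are carried forward by updating codes rather than by matching masks between frames. This is an illustrative assumption, not the paper's implementation: learned query/key/value projections, the hybrid (instance-to-pixel and pixel-to-instance) attention directions, normalization layers, and the order constraints are all omitted, and every name here is hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_instance_codes(prev_codes, feat_map):
    """Cross-attend previous-frame instance codes to the current
    frame's feature map (hypothetical simplification: shared
    embedding space, no learned projection matrices).

    prev_codes: (N, d) one latent code per tracked instance
    feat_map:   (H, W, d) current-frame CNN features
    Returns updated codes (N, d) and per-instance attention (N, H, W).
    """
    N, d = prev_codes.shape
    H, W, _ = feat_map.shape
    pixels = feat_map.reshape(H * W, d)                  # flatten spatial grid
    scores = prev_codes @ pixels.T / np.sqrt(d)          # (N, HW) similarities
    attn = softmax(scores, axis=-1)                      # where each instance looks
    updated = attn @ pixels                              # aggregate pixel features
    return updated, attn.reshape(N, H, W)
```

Because each row of the attention map is tied to a persistent instance code, the row index itself serves as the instance identity across frames, which is what lets an approach of this shape skip explicit frame-wise matching.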
How to Cite
Li, X., Wang, J., Li, X., & Lu, Y. (2022). Hybrid Instance-Aware Temporal Fusion for Online Video Instance Segmentation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(2), 1429-1437. https://doi.org/10.1609/aaai.v36i2.20032
AAAI Technical Track on Computer Vision II