Siamese Network with Interactive Transformer for Video Object Segmentation
DOI:
https://doi.org/10.1609/aaai.v36i2.20009
Keywords:
Computer Vision (CV)
Abstract
Semi-supervised video object segmentation (VOS) refers to segmenting the target object in the remaining frames of a video given its annotation in the first frame, and it has been actively studied in recent years. The key challenge lies in finding effective ways to exploit the spatio-temporal context of past frames to help learn a discriminative target representation for the current frame. In this paper, we propose a novel Siamese network with a specifically designed interactive transformer, called SITVOS, to enable effective context propagation from historical frames to the current frame. Technically, we use the transformer encoder and decoder to handle the past frames and the current frame separately, i.e., the encoder encodes robust spatio-temporal context of the target object from the past frames, while the decoder takes the feature embedding of the current frame as the query to retrieve the target from the encoder output. To further enhance the target representation, a feature interaction module (FIM) is devised to promote the information flow between the encoder and decoder. Moreover, we employ the Siamese architecture to extract backbone features of both past and current frames, which enables feature reuse and is more efficient than existing methods. Experimental results on three challenging benchmarks validate the superiority of SITVOS over state-of-the-art methods. Code is available at https://github.com/LANMNG/SITVOS.
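For readers who want a concrete picture of the propagation scheme the abstract describes, the sketch below illustrates the general idea in PyTorch: a shared (Siamese) backbone embeds past and current frames, a transformer encoder aggregates past-frame tokens into a memory, and a transformer decoder uses current-frame tokens as queries against that memory to produce a target-aware feature map. All module choices, dimensions, and names (e.g., SiameseTransformerVOSSketch, feat_dim) are illustrative assumptions; this is not the authors' SITVOS implementation and omits the feature interaction module.

```python
# Minimal sketch of the encoder/decoder context-propagation idea from the abstract.
# Layer choices and dimensions are assumptions, not the SITVOS architecture.
import torch
import torch.nn as nn


class SiameseTransformerVOSSketch(nn.Module):
    def __init__(self, feat_dim=256, num_heads=8, num_layers=2):
        super().__init__()
        # Shared (Siamese) backbone: the same weights embed past and current frames,
        # so past-frame features can be cached and reused across time steps.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, kernel_size=7, stride=16, padding=3),
            nn.ReLU(inplace=True),
        )
        enc_layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=num_heads,
                                               batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(d_model=feat_dim, nhead=num_heads,
                                               batch_first=True)
        # Encoder aggregates spatio-temporal context from past frames into a memory;
        # decoder queries that memory with current-frame features.
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=num_layers)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=num_layers)
        self.mask_head = nn.Conv2d(feat_dim, 1, kernel_size=1)

    def forward(self, past_frames, current_frame):
        # past_frames: (B, T, 3, H, W), current_frame: (B, 3, H, W)
        b, t, _, _, _ = past_frames.shape
        past_feat = self.backbone(past_frames.flatten(0, 1))   # (B*T, D, h, w)
        cur_feat = self.backbone(current_frame)                # (B, D, h, w)
        d, fh, fw = past_feat.shape[1:]
        # Flatten spatial (and temporal) dimensions into token sequences.
        memory_tokens = (past_feat.view(b, t, d, fh * fw)
                         .permute(0, 1, 3, 2).reshape(b, t * fh * fw, d))
        query_tokens = cur_feat.view(b, d, fh * fw).permute(0, 2, 1)  # (B, h*w, D)
        memory = self.encoder(memory_tokens)
        target = self.decoder(query_tokens, memory)             # (B, h*w, D)
        target = target.permute(0, 2, 1).reshape(b, d, fh, fw)
        return torch.sigmoid(self.mask_head(target))            # coarse mask probabilities


if __name__ == "__main__":
    model = SiameseTransformerVOSSketch()
    past = torch.randn(1, 3, 3, 64, 64)   # three past frames
    cur = torch.randn(1, 3, 64, 64)
    print(model(past, cur).shape)         # torch.Size([1, 1, 4, 4])
```

In this reading, the mask prediction for the current frame depends only on cached past-frame features plus one backbone pass over the new frame, which is the feature-reuse benefit the abstract attributes to the Siamese design.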
Published
2022-06-28
How to Cite
Lan, M., Zhang, J., He, F., & Zhang, L. (2022). Siamese Network with Interactive Transformer for Video Object Segmentation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(2), 1228-1236. https://doi.org/10.1609/aaai.v36i2.20009
Issue
Vol. 36 No. 2 (2022)
Section
AAAI Technical Track on Computer Vision II