Spatiotemporal Graph Neural Network based Mask Reconstruction for Video Object Segmentation
Abstract
This paper addresses the task of segmenting class-agnostic objects in the semi-supervised setting. Although previous detection-based methods achieve relatively good performance, these approaches extract the best proposal with a greedy strategy, which may lose local patch details outside the chosen candidate. In this paper, we propose a novel spatiotemporal graph neural network (STG-Net) to reconstruct more accurate masks for video object segmentation, which captures local contexts by utilizing all proposals. In the spatial graph, we treat the object proposals of a frame as nodes and represent their correlations with an edge weight strategy for mask context aggregation. To capture temporal information from previous frames, we use a memory network to refine the mask of the current frame by retrieving historic masks in a temporal graph. The joint use of both local patch details and temporal relationships allows us to better address challenges such as object occlusion and disappearance. Without online learning or fine-tuning, our STG-Net achieves state-of-the-art performance on four large benchmarks, demonstrating the effectiveness of the proposed approach.
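The core idea contrasted in the abstract, aggregating the masks of all proposals via graph edge weights instead of greedily keeping only the top-scoring candidate, can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the function names, the use of IoU as a pairwise similarity, and the particular edge-weight formula are not taken from the paper, which defines its own learned edge weight strategy.

```python
def iou(a, b):
    """Intersection-over-union between two flat binary masks."""
    inter = sum(x & y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return inter / union if union else 0.0

def reconstruct_mask(proposals, scores, thresh=0.5):
    """Illustrative (not the paper's) mask reconstruction: treat each
    proposal as a graph node, weight every node by its detector score
    combined with its mask overlap (IoU) against the top-scoring node,
    then threshold the weighted average of ALL proposal masks.
    Pixels recovered by lower-ranked proposals can still survive,
    unlike a greedy top-1 selection."""
    best = proposals[scores.index(max(scores))]
    # Edge-weight stand-in: score modulated by similarity to the best node.
    w = [s * (0.5 + 0.5 * iou(best, m)) for s, m in zip(scores, proposals)]
    total = sum(w) or 1.0
    soft = [sum(wi * m[p] for wi, m in zip(w, proposals)) / total
            for p in range(len(proposals[0]))]
    return [1 if v >= thresh else 0 for v in soft]
```

A greedy strategy would return `best` unchanged; here, a pixel covered only by lower-scoring but strongly overlapping proposals can still enter the reconstructed mask once its aggregated weight passes the threshold.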
How to Cite
Liu, D., Xu, S., Liu, X.-Y., Xu, Z., Wei, W., & Zhou, P. (2021). Spatiotemporal Graph Neural Network based Mask Reconstruction for Video Object Segmentation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(3), 2100-2108. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16307
AAAI Technical Track on Computer Vision II