Spatiotemporal Graph Neural Network based Mask Reconstruction for Video Object Segmentation

Authors

  • Daizong Liu, Huazhong University of Science and Technology
  • Shuangjie Xu, Deeproute.ai
  • Xiao-Yang Liu, Columbia University
  • Zichuan Xu, Dalian University of Technology
  • Wei Wei, Huazhong University of Science and Technology
  • Pan Zhou, Huazhong University of Science and Technology

DOI:

https://doi.org/10.1609/aaai.v35i3.16307

Keywords:

Segmentation

Abstract

This paper addresses the task of segmenting class-agnostic objects in a semi-supervised setting. Although previous detection-based methods achieve relatively good performance, these approaches extract the best proposal with a greedy strategy, which may lose local patch details outside the chosen candidate. In this paper, we propose a novel spatiotemporal graph neural network (STG-Net) that reconstructs more accurate masks for video object segmentation by capturing local contexts from all proposals. In the spatial graph, we treat the object proposals of a frame as nodes and represent their correlations with an edge-weight strategy for mask context aggregation. To capture temporal information from previous frames, we use a memory network to refine the mask of the current frame by retrieving historic masks in a temporal graph. The joint use of local patch details and temporal relationships allows us to better address challenges such as object occlusion and disappearance. Without online learning or fine-tuning, our STG-Net achieves state-of-the-art performance on four large benchmarks, demonstrating the effectiveness of the proposed approach.
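To make the spatial-graph idea concrete, the sketch below illustrates one plausible form of edge-weighted mask context aggregation over a frame's proposals. It is not the authors' released code: the function name, the tensor shapes, the dot-product similarity used for edge weights, and the final mean fusion are illustrative assumptions.

import torch
import torch.nn.functional as F

def spatial_graph_aggregate(proposal_feats, proposal_masks):
    """Aggregate mask context across all proposals of one frame (illustrative sketch).

    proposal_feats: (N, D) feature vector per proposal (hypothetical input).
    proposal_masks: (N, H, W) coarse mask logits per proposal (hypothetical input).
    Returns an (H, W) frame-level mask that mixes every proposal's mask,
    weighted by pairwise feature similarity (one possible edge-weight choice).
    """
    # Edge weights: row-normalized pairwise similarity between proposal features.
    sim = proposal_feats @ proposal_feats.t()          # (N, N)
    edge_w = F.softmax(sim, dim=-1)                    # each row sums to 1

    # Each node gathers mask context from all proposals, not just the best one.
    n, h, w = proposal_masks.shape
    refined = edge_w @ proposal_masks.reshape(n, -1)   # (N, H*W)
    refined = refined.reshape(n, h, w)

    # Fuse the refined node-level masks into a single frame-level mask.
    return refined.mean(dim=0)

# Toy usage with random tensors standing in for real proposal features and masks.
feats = torch.randn(5, 256)
masks = torch.randn(5, 64, 64)
frame_mask = spatial_graph_aggregate(feats, masks)
print(frame_mask.shape)  # torch.Size([64, 64])

In the paper's full pipeline, this per-frame output would additionally be refined with a memory network that retrieves historic masks from previous frames via the temporal graph; that step is omitted from the sketch.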

Published

2021-05-18

How to Cite

Liu, D., Xu, S., Liu, X.-Y., Xu, Z., Wei, W., & Zhou, P. (2021). Spatiotemporal Graph Neural Network based Mask Reconstruction for Video Object Segmentation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(3), 2100-2108. https://doi.org/10.1609/aaai.v35i3.16307

Issue

Section

AAAI Technical Track on Computer Vision II