Scene-Centric Joint Parsing of Cross-View Videos

Authors

  • Hang Qi, University of California, Los Angeles
  • Yuanlu Xu, University of California, Los Angeles
  • Tao Yuan, University of California, Los Angeles
  • Tianfu Wu, NC State University
  • Song-Chun Zhu, University of California, Los Angeles

DOI:

https://doi.org/10.1609/aaai.v32i1.12256

Keywords:

Joint Parsing, Scene-centric Representation, Knowledge Fusion, Spatio-temporal Semantic Parse Graph, Ontology Graph

Abstract

Cross-view video understanding is an important yet under-explored area in computer vision. In this paper, we introduce a joint parsing framework that integrates view-centric proposals into scene-centric parse graphs representing a coherent scene-centric understanding of cross-view scenes. Our key observations are that overlapping fields of view embed rich appearance and geometry correlations, and that knowledge fragments corresponding to individual vision tasks are governed by consistency constraints available in commonsense knowledge. The proposed joint parsing framework represents such correlations and constraints explicitly and generates semantic scene-centric parse graphs. Quantitative experiments show that scene-centric predictions in the parse graph outperform view-centric predictions.

Published

2018-04-27

How to Cite

Qi, H., Xu, Y., Yuan, T., Wu, T., & Zhu, S.-C. (2018). Scene-Centric Joint Parsing of Cross-View Videos. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.12256