Cut-Based Graph Learning Networks to Discover Compositional Structure of Sequential Video Data

Authors

  • Kyoung-Woon On, Seoul National University
  • Eun-Sol Kim, Kakao Brain
  • Yu-Jung Heo, Seoul National University
  • Byoung-Tak Zhang, Seoul National University

DOI:

https://doi.org/10.1609/aaai.v34i04.5978

Abstract

Conventional sequential learning methods such as Recurrent Neural Networks (RNNs) focus on interactions between consecutive inputs, i.e., first-order Markovian dependency. However, most sequential data, such as videos, have complex dependency structures that imply variable-length semantic flows and their compositions, which are hard to capture with conventional methods. Here, we propose Cut-Based Graph Learning Networks (CB-GLNs) for learning video data by discovering these complex structures of the video. The CB-GLNs represent video data as a graph, with nodes and edges corresponding to frames of the video and their dependencies, respectively. The CB-GLNs find compositional dependencies of the data in multilevel graph forms via a parameterized kernel with graph-cut and a message passing framework. We evaluate the proposed method on two different tasks for video understanding: video theme classification (YouTube-8M dataset (Abu-El-Haija et al. 2016)) and Video Question Answering (TVQA dataset (Lei et al. 2018)). The experimental results show that our model efficiently learns the semantic compositional structure of video data. Furthermore, our model achieves the highest performance in comparison to other baseline methods.
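The pipeline the abstract describes can be sketched in simplified form: frame features become graph nodes, a kernel produces edge weights, weak temporal edges are cut to expose semantic units, and message passing aggregates neighbor features. The sketch below is illustrative only, under assumed simplifications (a fixed dot-product kernel in place of the paper's learned kernel, and a threshold cut on consecutive-frame weights in place of the actual graph-cut); all function names are hypothetical, not from the authors' implementation.

```python
import numpy as np

def similarity_graph(frames, temperature=1.0):
    """Dense adjacency from a dot-product kernel over frame features
    (an illustrative stand-in for the paper's parameterized kernel)."""
    scores = frames @ frames.T / temperature
    # Row-wise softmax so each node's outgoing edge weights sum to 1.
    scores -= scores.max(axis=1, keepdims=True)
    weights = np.exp(scores)
    return weights / weights.sum(axis=1, keepdims=True)

def message_passing(frames, adjacency):
    """One round of neighborhood aggregation: each node's feature becomes
    the adjacency-weighted sum of its neighbors' features."""
    return adjacency @ frames

def temporal_cut(adjacency, threshold=0.1):
    """Split the sequence wherever consecutive frames are weakly connected;
    a crude stand-in for the graph-cut step that discovers semantic units.
    Returns (start, end) index pairs for each segment."""
    n = adjacency.shape[0]
    boundaries = [0]
    for t in range(1, n):
        if adjacency[t - 1, t] < threshold:
            boundaries.append(t)
    boundaries.append(n)
    return [(boundaries[i], boundaries[i + 1])
            for i in range(len(boundaries) - 1)]
```

On a toy sequence whose first two frame vectors resemble each other and differ from the last two, `temporal_cut` splits the sequence into two segments, mimicking how the CB-GLNs group frames into multilevel semantic units before aggregation.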

Published

2020-04-03

How to Cite

On, K.-W., Kim, E.-S., Heo, Y.-J., & Zhang, B.-T. (2020). Cut-Based Graph Learning Networks to Discover Compositional Structure of Sequential Video Data. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 5315-5322. https://doi.org/10.1609/aaai.v34i04.5978

Section

AAAI Technical Track: Machine Learning