Video Recovery via Learning Variation and Consistency of Images

Authors

  • Zhouyuan Huo, University of Texas at Arlington
  • Shangqian Gao, Northeastern University
  • Weidong Cai, University of Sydney
  • Heng Huang, University of Texas at Arlington

DOI:

https://doi.org/10.1609/aaai.v31i1.11241

Keywords:

Video recovery, matrix completion, low-rank model

Abstract

Matrix completion algorithms have been widely used to recover images with missing entries and have proved very effective. Recent works applied tensor completion models to video recovery under the assumption that all video frames are homogeneous and correlated. However, real videos are composed of different episodes or scenes, i.e., they are heterogeneous. Therefore, a video recovery model that exploits both spatio-temporal consistency and variation is needed. To address this problem, we propose a new video recovery method, Sectional Trace Norm with Variation and Consistency Constraints (STN-VCC). In our model, capped L1-norm regularization is used to learn the spatio-temporal consistency and variation between consecutive frames in video clips. Meanwhile, we introduce a new low-rank model that captures the low-rank structure in video frames with a better approximation of rank minimization than the traditional trace norm. We propose an efficient optimization algorithm and provide a proof of its convergence. We evaluate the proposed method on several video recovery tasks, and experimental results show that our new method consistently outperforms related approaches.
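To make the two ingredients of the abstract concrete, below is a minimal sketch (not the authors' STN-VCC implementation) of a proximal-style recovery loop that combines (i) singular value thresholding on each frame, a standard surrogate for low-rank/trace-norm minimization, with (ii) a capped-L1-style shrinkage on differences between consecutive frames, which smooths small temporal variations while leaving large changes (e.g., scene cuts) untouched. All function names, parameters (tau, lam, theta), and the update schedule are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * (trace norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def capped_l1_shrink(D, lam, theta):
    """Shrink small frame differences; keep large ones (scene changes) intact,
    mimicking the behavior of a capped L1 penalty on temporal variation."""
    shrunk = np.sign(D) * np.maximum(np.abs(D) - lam, 0.0)
    return np.where(np.abs(D) > theta, D, shrunk)

def recover(frames, mask, tau=5.0, lam=0.05, theta=0.5, iters=50):
    """frames: (T, H, W) array with arbitrary values at missing entries.
    mask:   (T, H, W) boolean array, True where pixels are observed."""
    X = np.where(mask, frames, 0.0)
    for _ in range(iters):
        # Low-rank step on each frame matrix.
        X = np.stack([svt(f, tau) for f in X])
        # Temporal step: smooth small inter-frame differences, preserve large ones.
        for t in range(1, X.shape[0]):
            X[t] = X[t - 1] + capped_l1_shrink(X[t] - X[t - 1], lam, theta)
        # Re-impose the observed pixels.
        X = np.where(mask, frames, X)
    return X

# Toy usage: a low-rank synthetic "video" with roughly 30% of pixels missing.
rng = np.random.default_rng(0)
clean = np.stack([np.outer(np.sin(np.arange(32) + t), np.cos(np.arange(32)))
                  for t in range(8)])
mask = rng.random(clean.shape) > 0.3
restored = recover(clean, mask)
print("masked-entry MAE:", np.abs(restored - clean)[~mask].mean())
```

The capped threshold theta is what distinguishes this from plain L1 smoothing: differences above theta are passed through unchanged, so heterogeneous segments are not blurred together, which is the intuition behind modeling both consistency and variation.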

Published

2017-02-12

How to Cite

Huo, Z., Gao, S., Cai, W., & Huang, H. (2017). Video Recovery via Learning Variation and Consistency of Images. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.11241