Motion-blurred Video Interpolation and Extrapolation

Authors

  • Dawit Mureja Argaw KAIST Robotics and Computer Vision Lab., Daejeon, Korea
  • Junsik Kim KAIST Robotics and Computer Vision Lab., Daejeon, Korea
  • Francois Rameau KAIST Robotics and Computer Vision Lab., Daejeon, Korea
  • In So Kweon KAIST Robotics and Computer Vision Lab., Daejeon, Korea

DOI:

https://doi.org/10.1609/aaai.v35i2.16173

Keywords:

Computational Photography, Image & Video Synthesis

Abstract

Abrupt motion of the camera or of objects in a scene results in a blurry video, so recovering a high-quality video requires two types of enhancement: visual enhancement and temporal upsampling. A broad range of work has attempted to recover clean frames from blurred image sequences or to temporally upsample frames by interpolation, yet very few studies handle both problems jointly. In this work, we present a novel framework for deblurring, interpolating, and extrapolating sharp frames from a motion-blurred video in an end-to-end manner. We design our framework to first learn, via optical flow estimation, the pixel-level motion that caused the blur in the given inputs, and then to predict multiple clean frames by warping the decoded features with the estimated flows. To ensure temporal coherence across the predicted frames and to address potential temporal ambiguity, we propose a simple yet effective flow-based rule. The effectiveness and favorability of our approach are highlighted through extensive qualitative and quantitative evaluations on motion-blurred datasets generated from high-speed videos.
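
The pipeline the abstract describes (flow estimation followed by feature warping) hinges on a differentiable warping operator. The sketch below is a minimal, hypothetical PyTorch illustration of such an operator, not the authors' released code; the function name `warp` and the tensor shapes are assumptions made for illustration only.

    # A minimal sketch (assumed, not the paper's implementation) of the core
    # operation: backward-warp a decoded feature map by an estimated optical
    # flow, so that multiple sharp frames can be synthesized from one input.
    import torch
    import torch.nn.functional as F

    def warp(feat: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        """Backward-warp features `feat` (B, C, H, W) by optical flow
        `flow` (B, 2, H, W), with flow given in pixel units."""
        b, _, h, w = feat.shape
        # Base sampling grid of pixel coordinates (x, y).
        ys, xs = torch.meshgrid(
            torch.arange(h, device=feat.device, dtype=feat.dtype),
            torch.arange(w, device=feat.device, dtype=feat.dtype),
            indexing="ij",
        )
        grid = torch.stack((xs, ys), dim=0).unsqueeze(0) + flow  # (B, 2, H, W)
        # Normalize coordinates to [-1, 1] as required by grid_sample.
        gx = 2.0 * grid[:, 0] / max(w - 1, 1) - 1.0
        gy = 2.0 * grid[:, 1] / max(h - 1, 1) - 1.0
        return F.grid_sample(feat, torch.stack((gx, gy), dim=-1),
                             align_corners=True)

In this reading, predicting a frame at a different time instant amounts to warping the same decoded features with a suitably scaled flow (e.g. `warp(feat, t * flow)` for a hypothetical normalized time offset `t`); the scaling convention here is an assumption for illustration, not a detail stated in the abstract.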

Published

2021-05-18

How to Cite

Argaw, D. M., Kim, J., Rameau, F., & Kweon, I. S. (2021). Motion-blurred Video Interpolation and Extrapolation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(2), 901-910. https://doi.org/10.1609/aaai.v35i2.16173

Section

AAAI Technical Track on Computer Vision I