Learning Light Field Angular Super-Resolution via a Geometry-Aware Network

Authors

  • Jing Jin, City University of Hong Kong
  • Junhui Hou, City University of Hong Kong
  • Hui Yuan, Shandong University
  • Sam Kwong, City University of Hong Kong

DOI:

https://doi.org/10.1609/aaai.v34i07.6771

Abstract

The acquisition of light field images with high angular resolution is costly. Although many methods have been proposed to improve the angular resolution of a sparsely-sampled light field, they typically focus on light fields with a small baseline, as captured by consumer light field cameras. By making full use of the intrinsic geometry information of light fields, in this paper we propose an end-to-end learning-based approach for angularly super-resolving a sparsely-sampled light field with a large baseline. Our model consists of two learnable modules and a physically-based module. Specifically, it includes a depth estimation module that explicitly models the scene geometry, a physically-based warping module for novel view synthesis, and a light field blending module specifically designed for light field reconstruction. Moreover, we introduce a novel loss function to promote the preservation of the light field parallax structure. Experimental results over various light field datasets, including large-baseline light field images, demonstrate the significant superiority of our method over state-of-the-art ones: it improves the PSNR of the second best method by up to 2 dB on average, while reducing the execution time by a factor of 48. In addition, our method preserves the light field parallax structure better.
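
Since the abstract describes a three-stage, geometry-aware pipeline (learned depth estimation, physically-based warping, learned blending), a minimal PyTorch sketch of such a pipeline is given below. The module names (DepthEstimator, warp_to_novel_view, BlendingNet), the layer widths, the 2×2 grayscale corner-view input, and the residual blending are illustrative assumptions, not the authors' actual network; refer to the paper for the exact architecture and the parallax-preserving loss.

```python
# Illustrative sketch only: shapes, layer sizes, and module names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """3x3 convolution followed by ReLU, shared by both learnable modules."""
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))


class DepthEstimator(nn.Module):
    """Learnable module 1: predicts one disparity map per novel view
    from the stack of sparsely-sampled input views."""
    def __init__(self, num_in_views, num_novel_views):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(num_in_views, 64),
            conv_block(64, 64),
            nn.Conv2d(64, num_novel_views, 3, padding=1),
        )

    def forward(self, sparse_views):            # (B, num_in_views, H, W), grayscale
        return self.net(sparse_views)           # (B, num_novel_views, H, W)


def warp_to_novel_view(src_view, disparity, delta_uv):
    """Physically-based (non-learnable) warping: re-samples a source view
    shifted by disparity * angular offset (delta_u, delta_v)."""
    b, _, h, w = src_view.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=src_view.dtype, device=src_view.device),
        torch.arange(w, dtype=src_view.dtype, device=src_view.device),
        indexing="ij",
    )
    x_new = xs.unsqueeze(0) + delta_uv[0] * disparity       # (B, H, W)
    y_new = ys.unsqueeze(0) + delta_uv[1] * disparity
    grid = torch.stack(                                      # normalize to [-1, 1]
        (2.0 * x_new / (w - 1) - 1.0, 2.0 * y_new / (h - 1) - 1.0), dim=-1
    )
    return F.grid_sample(src_view, grid, align_corners=True)


class BlendingNet(nn.Module):
    """Learnable module 2: fuses the warped views and refines them into the
    final densely-sampled light field via residual refinement."""
    def __init__(self, num_views):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(num_views, 64),
            conv_block(64, 64),
            nn.Conv2d(64, num_views, 3, padding=1),
        )

    def forward(self, warped_views):             # (B, num_views, H, W)
        return warped_views + self.net(warped_views)


# Toy forward pass: 2x2 corner views of a 64x64 grayscale light field,
# warping one corner view to a hypothetical central angular position.
corners = torch.rand(1, 4, 64, 64)
disparities = DepthEstimator(num_in_views=4, num_novel_views=1)(corners)
center = warp_to_novel_view(corners[:, 0:1], disparities[:, 0], (0.5, 0.5))
```

In the full pipeline, every input view would be warped to every novel angular position, the warped stack passed through the blending module, and the result supervised with the parallax-structure-preserving loss mentioned in the abstract, which is not sketched here.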

Published

2020-04-03

How to Cite

Jin, J., Hou, J., Yuan, H., & Kwong, S. (2020). Learning Light Field Angular Super-Resolution via a Geometry-Aware Network. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 11141-11148. https://doi.org/10.1609/aaai.v34i07.6771

Issue

Section

AAAI Technical Track: Vision