Detail-Preserving Transformer for Light Field Image Super-resolution

Authors

  • Shunzhou Wang, Beijing Institute of Technology
  • Tianfei Zhou, ETH Zurich
  • Yao Lu, Beijing Institute of Technology
  • Huijun Di, Beijing Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v36i3.20153

Keywords:

Computer Vision (CV)

Abstract

Recently, numerous algorithms have been developed to tackle the problem of light field super-resolution (LFSR), i.e., super-resolving low-resolution light fields to gain high-resolution views. Despite delivering encouraging results, these approaches are all convolution-based, and are naturally weak in the global relation modeling of sub-aperture images that is necessary to characterize the inherent structure of light fields. In this paper, we put forth a novel formulation built upon Transformers, treating LFSR as a sequence-to-sequence reconstruction task. In particular, our model regards the sub-aperture images of each vertical or horizontal angular view as a sequence, and establishes long-range geometric dependencies within each sequence via a spatial-angular locally-enhanced self-attention layer, which also maintains the locality of each sub-aperture image. Additionally, to better recover image details, we propose a detail-preserving Transformer (termed DPT), which leverages gradient maps of the light field to guide the sequence learning. DPT consists of two branches, each associated with a Transformer for learning from an original or gradient image sequence. The two branches are finally fused to obtain comprehensive feature representations for reconstruction. Evaluations are conducted on a number of light field datasets, including real-world scenes and synthetic data. The proposed method achieves superior performance compared with other state-of-the-art schemes. Our code is publicly available at: https://github.com/BITszwang/DPT.
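
To make the two-branch design described above more concrete, the following is a minimal PyTorch sketch. It is not the authors' released implementation (see the GitHub link above): the single-channel inputs, the finite-difference gradient operator, the per-view global tokens, and the use of a standard Transformer encoder in place of the spatial-angular locally-enhanced self-attention layer are all simplifying assumptions.

```python
# Minimal sketch of a two-branch (content + gradient) Transformer for one
# angular row/column of sub-aperture images. Assumptions only; not the DPT code.
import torch
import torch.nn as nn
import torch.nn.functional as F


def gradient_map(x):
    """Simple horizontal + vertical finite-difference gradient map."""
    dx = x - F.pad(x, (1, 0, 0, 0))[..., :, :-1]   # x[i, j] - x[i, j-1]
    dy = x - F.pad(x, (0, 0, 1, 0))[..., :-1, :]   # x[i, j] - x[i-1, j]
    return dx + dy


class AngularSequenceTransformer(nn.Module):
    """Treats the sub-aperture images of one angular row/column as a token sequence."""
    def __init__(self, dim=64, heads=4, layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=dim * 2, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, seq):            # seq: (B, A, dim), A views per row/column
        return self.encoder(seq)


class DPTSketch(nn.Module):
    """Content branch + gradient branch, fused before reconstruction."""
    def __init__(self, dim=64, scale=2):
        super().__init__()
        self.scale = scale
        self.embed = nn.Conv2d(1, dim, 3, padding=1)
        self.content_branch = AngularSequenceTransformer(dim)
        self.gradient_branch = AngularSequenceTransformer(dim)
        self.fuse = nn.Linear(dim * 2, dim)
        self.upsample = nn.Sequential(
            nn.Conv2d(dim, scale ** 2, 3, padding=1), nn.PixelShuffle(scale))

    def forward(self, lf):             # lf: (B, A, 1, H, W), one angular row of views
        b, a, c, h, w = lf.shape
        views = lf.view(b * a, c, h, w)
        feats = self.embed(views)                              # per-view features
        tokens = feats.mean(dim=(2, 3)).view(b, a, -1)         # one global token per view
        grad_tokens = self.embed(gradient_map(views)).mean(dim=(2, 3)).view(b, a, -1)
        fused = self.fuse(torch.cat(
            [self.content_branch(tokens), self.gradient_branch(grad_tokens)], dim=-1))
        # Broadcast the fused sequence feature back onto each view, then reconstruct.
        feats = feats + fused.view(b * a, -1, 1, 1)
        out = self.upsample(feats)
        return out.view(b, a, 1, h * self.scale, w * self.scale)
```

As a quick check of shapes, a forward pass with `lf = torch.randn(1, 5, 1, 32, 32)` (one angular row of five 32x32 views) returns a tensor of shape (1, 5, 1, 64, 64) at scale 2.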

Published

2022-06-28

How to Cite

Wang, S., Zhou, T., Lu, Y., & Di, H. (2022). Detail-Preserving Transformer for Light Field Image Super-resolution. Proceedings of the AAAI Conference on Artificial Intelligence, 36(3), 2522-2530. https://doi.org/10.1609/aaai.v36i3.20153

Issue

Vol. 36 No. 3 (2022)

Section

AAAI Technical Track on Computer Vision III