Training Binary Neural Network without Batch Normalization for Image Super-Resolution

Authors

  • Xinrui Jiang, State Key Laboratory of Integrated Services Networks, School of Telecommunications Engineering, Xidian University
  • Nannan Wang, State Key Laboratory of Integrated Services Networks, School of Telecommunications Engineering, Xidian University
  • Jingwei Xin, State Key Laboratory of Integrated Services Networks, School of Electronic Engineering, Xidian University
  • Keyu Li, State Key Laboratory of Integrated Services Networks, School of Telecommunications Engineering, Xidian University
  • Xi Yang, State Key Laboratory of Integrated Services Networks, School of Telecommunications Engineering, Xidian University
  • Xinbo Gao, Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications

DOI:

https://doi.org/10.1609/aaai.v35i2.16263

Keywords:

Low Level & Physics-based Vision

Abstract

Recently, binary neural network (BNN) based super-resolution (SR) methods have enjoyed initial success in the SR field. However, there is a noticeable performance gap between binarized models and their full-precision counterparts. Furthermore, the batch normalization (BN) layers in binary SR networks introduce floating-point calculations, which is unfriendly to low-precision hardware. Therefore, there is still room for improvement in terms of both model performance and efficiency. Focusing on this issue, in this paper we first explore a novel binary training mechanism based on the feature distribution, allowing us to replace all BN layers with a simple training method. Then, we construct a strong baseline by combining the highlights of recent binarization methods, which already surpasses the state of the art. Next, to train a highly accurate binarized SR model, we also develop a lightweight network architecture and a multi-stage knowledge distillation strategy to enhance the model's representation ability. Extensive experiments demonstrate that the proposed method not only offers lower computational cost than conventional floating-point networks but also outperforms state-of-the-art binary methods on standard SR networks.
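For readers unfamiliar with the binarization the abstract refers to, the core mechanism in most BNN layers is a sign-based 1-bit quantization of the weights with a straight-through estimator (STE) for the backward pass. The sketch below illustrates that common scheme (scaling factor, sign binarization, clipped STE gradient); it is a generic illustration under those assumptions, not the paper's exact formulation.

```python
import numpy as np

def binarize(w):
    """Binarize weights to {-alpha, +alpha}, where alpha is the mean
    absolute value; the scaling roughly preserves output magnitude."""
    alpha = np.mean(np.abs(w))        # per-tensor scaling factor
    return alpha * np.sign(w), alpha

def ste_grad(w, grad_out, clip=1.0):
    """Straight-through estimator: pass the gradient through sign()
    unchanged, but zero it where |w| exceeds the clip threshold."""
    return grad_out * (np.abs(w) <= clip)

# Toy example: a small weight tensor and a unit upstream gradient.
w = np.array([0.3, -0.8, 1.5, -0.1])
w_bin, alpha = binarize(w)            # alpha = 0.675
grad = ste_grad(w, np.ones_like(w))   # gradient zeroed where |w| > 1
```

Because the binarized weights take only two values, the convolution reduces to additions and sign flips at inference time, which is the source of the computational savings the abstract describes; removing BN then eliminates the remaining floating-point normalization arithmetic.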

Published

2021-05-18

How to Cite

Jiang, X., Wang, N., Xin, J., Li, K., Yang, X., & Gao, X. (2021). Training Binary Neural Network without Batch Normalization for Image Super-Resolution. Proceedings of the AAAI Conference on Artificial Intelligence, 35(2), 1700-1707. https://doi.org/10.1609/aaai.v35i2.16263

Section

AAAI Technical Track on Computer Vision I