Stochastic Precision Ensemble: Self-Knowledge Distillation for Quantized Deep Neural Networks

Authors

  • Yoonho Boo Seoul National University
  • Sungho Shin Seoul National University
  • Jungwook Choi Hanyang University
  • Wonyong Sung Seoul National University

DOI:

https://doi.org/10.1609/aaai.v35i8.16839

Keywords:

Learning on the Edge & Model Compression

Abstract

The quantization of deep neural networks (QDNNs) has been actively studied for deployment in edge devices. Recent studies employ the knowledge distillation (KD) method to improve the performance of quantized networks. In this study, we propose stochastic precision ensemble training for QDNNs (SPEQ). SPEQ is a KD training scheme in which the teacher shares the model parameters of the student network. We obtain the soft labels of the teacher by stochastically changing the bit precision of the activations at each layer of the forward-pass computation. The student model is trained with these soft labels to reduce the activation quantization noise. The cosine similarity loss, instead of the KL divergence, is employed for KD training. Because the teacher model changes continuously through random bit-precision assignment, the training exploits the effect of stochastic ensemble KD. SPEQ outperforms existing quantization training methods on various tasks, such as image classification, question answering, and transfer learning, without the need for cumbersome teacher networks.
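The following is a minimal sketch of the training idea described in the abstract, written in PyTorch. The toy network, the uniform quantizer, the student bit width, and the pool of teacher bit widths are illustrative assumptions, not the authors' exact implementation; it only shows how a shared-parameter teacher with randomly chosen per-layer activation precision can supply soft labels for a cosine-similarity KD loss.

```python
# Sketch of SPEQ-style self-distillation: teacher and student share parameters,
# and the teacher's activation bit precision is drawn at random per layer.
import torch
import torch.nn as nn
import torch.nn.functional as F


def quantize_activation(x, bits):
    """Uniform activation quantizer with a straight-through estimator (assumed quantizer)."""
    scale = (2 ** bits) - 1
    x = torch.clamp(x, 0.0, 1.0)          # assume activations already in [0, 1]
    q = torch.round(x * scale) / scale    # uniform quantization to `bits` bits
    return x + (q - x).detach()           # forward: quantized, backward: identity


class QuantMLP(nn.Module):
    """Toy network whose activation precision can be set per layer."""
    def __init__(self, in_dim=784, hidden=256, out_dim=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.fc3 = nn.Linear(hidden, out_dim)

    def forward(self, x, bits_per_layer):
        x = quantize_activation(torch.sigmoid(self.fc1(x)), bits_per_layer[0])
        x = quantize_activation(torch.sigmoid(self.fc2(x)), bits_per_layer[1])
        return self.fc3(x)


model = QuantMLP()                         # one set of parameters for teacher and student
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
student_bits = [2, 2]                      # low-precision deployment target (assumption)
teacher_bit_choices = [4, 6, 8]            # higher-precision pool for the teacher (assumption)


def speq_step(x, y):
    # Teacher pass: bit precision drawn at random for each layer, no gradient,
    # so the teacher behaves like a stochastic ensemble of higher-precision models.
    with torch.no_grad():
        t_bits = [teacher_bit_choices[torch.randint(len(teacher_bit_choices), (1,)).item()]
                  for _ in student_bits]
        teacher_logits = model(x, t_bits)

    # Student pass at the fixed low precision.
    student_logits = model(x, student_bits)

    # Cosine-similarity distillation loss (in place of KL divergence) plus the task loss.
    kd_loss = 1.0 - F.cosine_similarity(student_logits, teacher_logits, dim=1).mean()
    ce_loss = F.cross_entropy(student_logits, y)
    loss = ce_loss + kd_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Example usage with random data.
x = torch.rand(32, 784)
y = torch.randint(0, 10, (32,))
print(speq_step(x, y))
```

Because the teacher's per-layer bit widths are resampled at every step while the parameters stay shared with the student, no separate teacher network has to be trained or stored.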

Published

2021-05-18

How to Cite

Boo, Y., Shin, S., Choi, J., & Sung, W. (2021). Stochastic Precision Ensemble: Self-Knowledge Distillation for Quantized Deep Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 6794-6802. https://doi.org/10.1609/aaai.v35i8.16839

Section

AAAI Technical Track on Machine Learning I