Any-Precision Deep Neural Networks

Authors

  • Haichao Yu, UIUC
  • Haoxiang Li, Wormpex AI Research
  • Humphrey Shi, University of Oregon & UIUC
  • Thomas S. Huang, UIUC
  • Gang Hua, Wormpex AI Research

Keywords

(Deep) Neural Network Algorithms, Learning on the Edge & Model Compression

Abstract

We present any-precision deep neural networks (DNNs), which are trained with a new method that allows the learned DNNs to be flexible in numerical precision during inference. At runtime, the same model can be directly set to different bit-widths by truncating the least significant bits, supporting a dynamic trade-off between speed and accuracy. When all layers are set to low bit-widths, we show that the model achieves accuracy comparable to that of dedicated models trained at the same precision. This property facilitates flexible deployment of deep learning models in real-world applications, where trade-offs between model accuracy and runtime efficiency are often sought. Prior work trains a separate model for each fixed efficiency/accuracy trade-off point, but how to produce a single model whose precision is flexible at runtime remains largely unexplored. When the desired trade-off varies over time or even changes dynamically at runtime, re-training models accordingly is infeasible, and the storage budget may forbid keeping multiple models. Our proposed framework achieves this flexibility without performance degradation. More importantly, we demonstrate that this achievement is agnostic to model architectures and applicable to multiple vision tasks. Our code is released at https://github.com/SHI-Labs/Any-Precision-DNNs.
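The core mechanism described above, deriving a lower-precision model from a higher-precision one by truncating least significant bits, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (see their repository for that); the function name, the symmetric uniform quantizer, and the 8-bit base precision are illustrative assumptions.

```python
import numpy as np

def truncate_precision(weights, num_bits, max_bits=8):
    """Sketch of bit-truncation: quantize weights uniformly to `max_bits`
    integer levels, then drop the (max_bits - num_bits) least significant
    bits to emulate running the same model at a lower precision."""
    # Symmetric uniform quantization scale (illustrative choice).
    scale = np.abs(weights).max() / (2 ** (max_bits - 1) - 1)
    q = np.round(weights / scale).astype(np.int32)  # max_bits-level codes
    shift = max_bits - num_bits
    q_truncated = (q >> shift) << shift             # zero out the LSBs
    return q_truncated * scale                      # dequantize back to floats

weights = np.linspace(-1.0, 1.0, 9)
w8 = truncate_precision(weights, num_bits=8)  # full base precision
w2 = truncate_precision(weights, num_bits=2)  # coarse, after dropping 6 LSBs
```

At 8 bits the reconstruction error is bounded by half a quantization step, while each truncated bit roughly doubles the step size, which is why the paper trains the model so that every truncation level remains accurate.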

Published

2021-05-18

How to Cite

Yu, H., Li, H., Shi, H., Huang, T. S., & Hua, G. (2021). Any-Precision Deep Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 10763-10771. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17286

Section

AAAI Technical Track on Machine Learning V