Shoot to Know What: An Application of Deep Networks on Mobile Devices

Authors

  • Jiaxiang Wu, Institute of Automation, Chinese Academy of Sciences
  • Qinghao Hu, Institute of Automation, Chinese Academy of Sciences
  • Cong Leng, Institute of Automation, Chinese Academy of Sciences
  • Jian Cheng, Institute of Automation, Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v30i1.9831

Keywords:

Convolutional Neural Network, Quantization, Mobile Devices

Abstract

Convolutional neural networks (CNNs) have achieved impressive performance across a wide range of computer vision tasks. However, deploying them on mobile devices remains difficult due to their high computational complexity. In this demo, we propose the Quantized CNN (Q-CNN), an efficient framework for CNN models that enables fast and accurate image classification on mobile devices. Our Q-CNN framework dramatically accelerates computation and reduces storage/memory consumption, so that a mobile device can independently run an ImageNet-scale CNN model. Experiments on the ILSVRC-12 dataset demonstrate a 4-6x speed-up and 15-20x compression, with merely a one-percentage-point drop in classification accuracy. With the Q-CNN framework, even mobile devices can accurately classify images within one second.
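The abstract does not spell out the quantization scheme, but the reported compression and speed-up are characteristic of product-quantizing weight vectors and replacing inner products with table lookups. The sketch below illustrates that general idea, not the paper's exact method: each weight column is split into sub-vectors, each subspace is k-means-quantized to a small codebook, and a layer's response is approximated by summing precomputed codeword-input inner products. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def kmeans(X, k, n_iter=20, rng=None):
    """Plain Lloyd's k-means over the rows of X; returns (centroids, labels)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        dists = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                C[j] = pts.mean(0)
    return C, labels

def quantize_weights(W, n_sub=4, k=16, rng=None):
    """Split each column of the (d x n) weight matrix W into n_sub sub-vectors
    and k-means-quantize each subspace. Storage drops from d*n floats to
    n_sub*n one-byte codes plus n_sub small codebooks."""
    d, n = W.shape
    sub = d // n_sub
    codebooks, codes = [], np.empty((n_sub, n), dtype=np.uint8)
    for s in range(n_sub):
        C, labels = kmeans(W[s * sub:(s + 1) * sub].T.copy(), k, rng=rng)
        codebooks.append(C)
        codes[s] = labels
    return codebooks, codes

def approx_matvec(codebooks, codes, x):
    """Approximate W.T @ x with one lookup table per subspace: precompute the
    k codeword-input inner products, then do one table lookup per output
    instead of a full inner product -- the source of the speed-up."""
    sub = codebooks[0].shape[1]
    out = np.zeros(codes.shape[1])
    for s, C in enumerate(codebooks):
        table = C @ x[s * sub:(s + 1) * sub]  # k small inner products
        out += table[codes[s]]                # one lookup per output neuron
    return out
```

With d inputs, n outputs, n_sub subspaces, and k codewords, the dominant cost per input falls from d*n multiply-adds to roughly n_sub*(k*(d/n_sub) + n) operations, which is where a quantized layer gains both its compression and its acceleration.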

Published

2016-03-05

How to Cite

Wu, J., Hu, Q., Leng, C., & Cheng, J. (2016). Shoot to Know What: An Application of Deep Networks on Mobile Devices. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.9831