EtinyNet: Extremely Tiny Network for TinyML

Authors

  • Kunran Xu Xidian University
  • Yishi Li Xidian University
  • Huawei Zhang Xidian University
  • Rui Lai Xidian University
  • Lin Gu RIKEN AIP; The University of Tokyo

DOI:

https://doi.org/10.1609/aaai.v36i4.20387

Keywords:

Domain(s) Of Application (APP)

Abstract

AI applications are largely confined to high-income countries because their implementation depends on expensive GPU cards (~$2,000) and a reliable power supply (~200 W). To deploy AI in resource-poor settings on cheaper (~$20), low-power (<1 W) devices, key modifications are required to adapt neural networks for tiny machine learning (TinyML). In this paper, to fit CNNs onto storage-limited devices, we develop efficient tiny models with only hundreds of kilobytes of parameters. Toward this end, we first design a parameter-efficient tiny architecture by introducing a dense linear depthwise block. Then, a novel adaptive scale quantization (ASQ) method is proposed to further quantize tiny models to aggressively low bit-widths while retaining accuracy. With the optimized architecture and 4-bit ASQ, we present a family of ultra-lightweight networks, named EtinyNet, that achieves 57.0% ImageNet top-1 accuracy with an extremely tiny model size of 340 KB. When deployed on an off-the-shelf commercial microcontroller for object detection, EtinyNet achieves a state-of-the-art 56.4% mAP on Pascal VOC. Furthermore, experimental results on a compact Xilinx FPGA show that EtinyNet operates at a remarkably low power of 620 mW, about 5.6x lower than existing FPGA designs. The code and demo are available at https://github.com/aztc/EtinyNet
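The "dense linear depthwise block" named in the abstract is the building unit behind EtinyNet's parameter efficiency. As a rough illustration only, the sketch below assumes a depthwise → pointwise → depthwise ordering in which the first depthwise convolution stays linear (no activation) and dense shortcuts reuse earlier features; the class name, channel handling, and layer details are assumptions for illustration, not the authors' exact design.

```python
# Hypothetical sketch of a dense linear depthwise block (DLB). The structure
# (linear depthwise conv -> pointwise conv -> depthwise conv, with dense
# shortcuts) is an assumption for illustration, not the paper's code.
import torch
import torch.nn as nn

class DenseLinearDepthwiseBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # First depthwise conv is kept linear: no activation after it.
        self.dconv1 = nn.Conv2d(channels, channels, 3, padding=1,
                                groups=channels, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        # Pointwise conv mixes information across channels.
        self.pconv = nn.Conv2d(channels, channels, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        # Second depthwise conv, this time followed by an activation.
        self.dconv2 = nn.Conv2d(channels, channels, 3, padding=1,
                                groups=channels, bias=False)
        self.bn3 = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y = self.bn1(self.dconv1(x))                # linear depthwise, no ReLU
        y = self.act(self.bn2(self.pconv(y)) + x)   # dense shortcut from input
        z = self.act(self.bn3(self.dconv2(y)) + y)  # dense shortcut from middle
        return z

# Usage: feature-map shape is preserved through the block.
block = DenseLinearDepthwiseBlock(32)
out = block(torch.randn(1, 32, 112, 112))  # -> torch.Size([1, 32, 112, 112])
```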
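The abstract describes ASQ only at a high level: quantize tiny models down to 4 bits while adapting the quantization scale to retain accuracy. The following is a minimal, generic sketch of a symmetric 4-bit fake-quantizer with a learnable step size and a straight-through estimator, a stand-in for the idea rather than the paper's exact formulation; the class name and defaults are hypothetical.

```python
# Generic 4-bit fake-quantizer with a learnable (adaptive) scale and a
# straight-through estimator (STE). Illustrative only; not the paper's ASQ.
import torch
import torch.nn as nn

class AdaptiveScaleQuant4bit(nn.Module):
    """Symmetric 4-bit fake-quantization with a learnable step size."""
    def __init__(self, init_scale: float = 0.1):
        super().__init__()
        self.scale = nn.Parameter(torch.tensor(init_scale))  # learned scale
        self.qmin, self.qmax = -8, 7  # signed 4-bit integer range

    def forward(self, w: torch.Tensor) -> torch.Tensor:
        s = self.scale.abs().clamp(min=1e-8)
        w_s = w / s
        # STE applied to rounding only, so gradients still flow to both the
        # weights and the learnable scale during backpropagation.
        q = w_s + (torch.round(w_s) - w_s).detach()
        q = torch.clamp(q, self.qmin, self.qmax)
        return q * s  # dequantized ("fake-quantized") weights for training

# Usage: fake-quantize a conv weight tensor during quantization-aware training.
quant = AdaptiveScaleQuant4bit()
w = torch.randn(32, 32, 3, 3)
w_q = quant(w)  # differentiable w.r.t. both w and quant.scale
```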

Published

2022-06-28

How to Cite

Xu, K., Li, Y., Zhang, H., Lai, R., & Gu, L. (2022). EtinyNet: Extremely Tiny Network for TinyML. Proceedings of the AAAI Conference on Artificial Intelligence, 36(4), 4628-4636. https://doi.org/10.1609/aaai.v36i4.20387

Section

AAAI Technical Track on Domain(s) Of Application