TY - JOUR
AU - Xu, Kunran
AU - Li, Yishi
AU - Zhang, Huawei
AU - Lai, Rui
AU - Gu, Lin
PY - 2022/06/28
Y2 - 2024/03/28
TI - EtinyNet: Extremely Tiny Network for TinyML
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 36
IS - 4
SE - AAAI Technical Track on Domain(s) Of Application
DO - 10.1609/aaai.v36i4.20387
UR - https://ojs.aaai.org/index.php/AAAI/article/view/20387
SP - 4628
EP - 4636
AB - AI applications remain concentrated in high-income countries because their implementation depends on expensive GPU cards (~$2,000) and a reliable power supply (~200 W). To deploy AI in resource-poor settings on cheaper (~$20) and low-power (<1 W) devices, key modifications are required to adapt neural networks for tiny machine learning (TinyML). In this paper, to fit CNNs into storage-limited devices, we develop efficient tiny models with only hundreds of kilobytes of parameters. Toward this end, we first design a parameter-efficient tiny architecture by introducing the dense linear depthwise block. Then, a novel adaptive scale quantization (ASQ) method is proposed to further quantize tiny models to aggressively low bit-widths while retaining accuracy. With the optimized architecture and 4-bit ASQ, we present a family of ultra-lightweight networks, named EtinyNet, that achieves 57.0% ImageNet top-1 accuracy with an extremely tiny model size of 340 KB. When deployed on an off-the-shelf commercial microcontroller for object detection tasks, EtinyNet achieves a state-of-the-art 56.4% mAP on Pascal VOC. Furthermore, experimental results on a compact Xilinx FPGA indicate that EtinyNet achieves a notably low power consumption of 620 mW, about 5.6x lower than existing FPGA designs. The code and demo are available at https://github.com/aztc/EtinyNet
ER -