Boosted Dynamic Neural Networks

Authors

  • Haichao Yu, University of Illinois Urbana-Champaign
  • Haoxiang Li, Wormpex AI Research
  • Gang Hua, Wormpex AI Research
  • Gao Huang, Tsinghua University
  • Humphrey Shi, University of Illinois Urbana-Champaign & University of Oregon

DOI:

https://doi.org/10.1609/aaai.v37i9.26302

Keywords:

ML: Deep Neural Architectures, ML: Learning on the Edge & Model Compression

Abstract

Early-exiting dynamic neural networks (EDNNs) are a widely studied class of dynamic neural networks. A typical EDNN attaches multiple prediction heads at different depths of the network backbone. During inference, the model exits either at the last prediction head or at an intermediate head whose prediction confidence exceeds a predefined threshold. To optimize the model, all prediction heads and the network backbone are trained jointly on every batch of training data. This introduces a train-test mismatch: during training, every prediction head is optimized on all types of data, whereas at test time the deeper heads only process the harder inputs that earlier heads were not confident enough to exit on. Handling inputs differently in the two phases thus creates a mismatch between the training and testing data distributions seen by each head. To mitigate this problem, we formulate an EDNN as an additive model inspired by gradient boosting and propose multiple training techniques to optimize the model effectively. We name our method BoostNet. Our experiments show that it achieves state-of-the-art performance on the CIFAR100 and ImageNet datasets in both anytime and budgeted-batch prediction modes. Our code is released at https://github.com/SHI-Labs/Boosted-Dynamic-Networks.
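The sketch below illustrates the two mechanisms the abstract describes: confidence-thresholded early exiting and an additive (boosting-style) combination of head outputs. It is a minimal PyTorch illustration under assumed module names, layer sizes, and a simple whole-batch exit rule; it is not the paper's actual BoostNet implementation.

```python
# Minimal sketch of early-exit inference with a confidence threshold and an
# additive ("boosted") combination of head outputs. All names and sizes here
# are illustrative assumptions, not the paper's released code.
import torch
import torch.nn as nn


class EarlyExitNet(nn.Module):
    """Backbone split into stages, each followed by its own prediction head."""

    def __init__(self, num_classes: int = 100, width: int = 64, num_stages: int = 3):
        super().__init__()
        self.stages = nn.ModuleList(
            [nn.Sequential(nn.Linear(width, width), nn.ReLU()) for _ in range(num_stages)]
        )
        self.heads = nn.ModuleList(
            [nn.Linear(width, num_classes) for _ in range(num_stages)]
        )

    def forward(self, x: torch.Tensor) -> list[torch.Tensor]:
        # Training-time forward: return the logits of every head.
        logits = []
        for stage, head in zip(self.stages, self.heads):
            x = stage(x)
            logits.append(head(x))
        return logits

    @torch.no_grad()
    def predict_early_exit(self, x: torch.Tensor, threshold: float = 0.9):
        """Exit at the first head whose max softmax confidence exceeds `threshold`.

        Following the additive-model view in the abstract, the running
        prediction is the sum of head outputs computed so far, rather than
        the output of a single head in isolation.
        """
        ensemble_logits = None
        for k, (stage, head) in enumerate(zip(self.stages, self.heads)):
            x = stage(x)
            head_logits = head(x)
            ensemble_logits = head_logits if ensemble_logits is None else ensemble_logits + head_logits
            confidence = torch.softmax(ensemble_logits, dim=-1).max(dim=-1).values
            # Simplified exit rule: the whole batch exits together once every
            # sample is confident, or at the final head.
            if bool((confidence > threshold).all()) or k == len(self.stages) - 1:
                return ensemble_logits, k  # combined logits and exit index
        return ensemble_logits, len(self.stages) - 1


# Usage sketch: a batch of feature vectors, exiting once the ensemble is confident.
model = EarlyExitNet()
logits, exit_idx = model.predict_early_exit(torch.randn(8, 64), threshold=0.5)
print(exit_idx, logits.shape)
```

In practice, budgeted-batch prediction lets different samples exit at different heads and calibrates the thresholds to a compute budget; the whole-batch rule above only keeps the sketch short.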

Published

2023-06-26

How to Cite

Yu, H., Li, H., Hua, G., Huang, G., & Shi, H. (2023). Boosted Dynamic Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 37(9), 10989-10997. https://doi.org/10.1609/aaai.v37i9.26302

Section

AAAI Technical Track on Machine Learning IV