MEAL: Multi-Model Ensemble via Adversarial Learning

Authors

  • Zhiqiang Shen, University of Illinois at Urbana-Champaign
  • Zhankui He, Fudan University
  • Xiangyang Xue, Fudan University

DOI:

https://doi.org/10.1609/aaai.v33i01.33014886

Abstract

Often the best performing deep neural models are ensembles of multiple base-level networks. Unfortunately, the space required to store these many networks, and the time required to execute them at test time, prohibit their use in applications where test sets are large (e.g., ImageNet). In this paper, we present a method for compressing large, complex trained ensembles into a single network, where knowledge from a variety of trained deep neural networks (DNNs) is distilled and transferred to a single DNN. In order to distill diverse knowledge from different trained (teacher) models, we propose an adversarial-based learning strategy in which we define a block-wise training loss to guide and optimize the predefined student network to recover the knowledge in the teacher models, while simultaneously promoting a discriminator network to distinguish teacher from student features. The proposed ensemble method (MEAL) of transferring distilled knowledge with adversarial learning exhibits three important advantages: (1) the student network that learns the distilled knowledge with discriminators is optimized better than the original model; (2) fast inference is realized by a single forward pass, while the performance even surpasses that of a traditional ensemble of the original models; (3) the student network can learn distilled knowledge from teacher models of arbitrary structure. Extensive experiments on the CIFAR-10/100, SVHN and ImageNet datasets demonstrate the effectiveness of our MEAL method. On ImageNet, our ResNet-50 based MEAL achieves 21.79%/5.99% top-1/top-5 validation error, outperforming the original model by 2.06%/1.14%.
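The abstract describes two interacting objectives: a similarity loss that pushes the student's soft outputs toward the teachers', and an adversarial loss in which a discriminator tries to tell teacher features from student features while the student tries to fool it. The following is a minimal, hypothetical numpy sketch of such a loss combination; it is an illustration under stated assumptions, not the paper's actual block-wise formulation, and all function and variable names are invented for clarity.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-8):
    # KL(p || q), summed over classes, averaged over the batch
    return np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1))

def meal_style_losses(teacher_logits, student_logits, d_teacher, d_student, eps=1e-8):
    """Illustrative sketch of the two objective families in MEAL.

    - similarity loss: the student's soft predictions should match the
      (possibly ensembled) teacher's soft predictions
    - adversarial loss: the student is rewarded when the discriminator
      scores its features as teacher-like (d_student close to 1)
    - discriminator loss: standard binary cross-entropy, labeling
      teacher features 1 and student features 0
    d_teacher / d_student are assumed discriminator output probabilities.
    """
    p_teacher = softmax(teacher_logits)
    p_student = softmax(student_logits)
    sim_loss = kl_div(p_teacher, p_student)
    adv_loss = -np.mean(np.log(d_student + eps))            # student's term
    disc_loss = -np.mean(np.log(d_teacher + eps)
                         + np.log(1.0 - d_student + eps))   # discriminator's term
    return sim_loss, adv_loss, disc_loss
```

In a real training loop the student would minimize a weighted sum of `sim_loss` and `adv_loss` while the discriminator minimizes `disc_loss`, alternating updates as in standard adversarial training.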

Published

2019-07-17

How to Cite

Shen, Z., He, Z., & Xue, X. (2019). MEAL: Multi-Model Ensemble via Adversarial Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 4886-4893. https://doi.org/10.1609/aaai.v33i01.33014886

Section

AAAI Technical Track: Machine Learning