Network Recasting: A Universal Method for Network Architecture Transformation


  • Joonsang Yu Seoul National University
  • Sungbum Kang Seoul National University
  • Kiyoung Choi Seoul National University



This paper proposes network recasting as a general method for network architecture transformation. The primary goal of the method is to accelerate inference through the transformation, but it has many other practical applications as well. The method is based on block-wise recasting: it recasts each source block in a pre-trained teacher network into a target block in a student network. For the recasting, the target block is trained so that its output activation approximates that of the source block. Performing this recasting block by block, in a sequential manner, transforms the network architecture while preserving accuracy. The method can transform an arbitrary teacher network type into an arbitrary student network type, and can even generate a mixed-architecture network that consists of two or more block types. Network recasting can produce a network with fewer parameters and/or activations, which reduces inference time significantly. Naturally, it can also be used for network compression by recasting a trained network into a smaller network of the same type. Our experiments show that it outperforms previous compression approaches in terms of actual speedup on a GPU.
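The core training step described above can be illustrated with a minimal sketch: a student "target block" is fit so that its output activation approximates that of a teacher "source block" under a mean-squared-error objective. This is only an illustrative toy (the blocks, dimensions, and plain-NumPy gradient descent are assumptions for clarity; the paper's recasting operates on real network blocks such as convolutional or residual blocks):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a "source block" in the pre-trained teacher:
# a two-layer map with a ReLU nonlinearity (weights are fixed/pre-trained).
def teacher_block(x, W1, W2):
    return np.maximum(x @ W1, 0.0) @ W2

d_in, d_hidden, d_out = 8, 16, 4
W1 = rng.normal(size=(d_in, d_hidden))
W2 = rng.normal(size=(d_hidden, d_out))

# Student "target block": here a single smaller linear layer, trained so
# that its output activation approximates the teacher block's output.
Ws = rng.normal(size=(d_in, d_out)) * 0.01

X = rng.normal(size=(256, d_in))        # input activations fed to both blocks
Y = teacher_block(X, W1, W2)            # teacher's output activations (targets)

initial_loss = np.mean((X @ Ws - Y) ** 2)
lr = 1e-3
for _ in range(500):
    pred = X @ Ws
    grad = 2.0 * X.T @ (pred - Y) / len(X)  # gradient of the MSE w.r.t. Ws
    Ws -= lr * grad
final_loss = np.mean((X @ Ws - Y) ** 2)
```

In the full method, this local fitting would be repeated block by block along the network, so each recast block only needs to reproduce its source block's activations rather than the end-to-end output.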




How to Cite

Yu, J., Kang, S., & Choi, K. (2019). Network Recasting: A Universal Method for Network Architecture Transformation. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 5701-5708.



AAAI Technical Track: Machine Learning