Adversary Is the Best Teacher: Towards Extremely Compact Neural Networks
DOI:
https://doi.org/10.1609/aaai.v32i1.12182
Keywords:
deep learning, knowledge distillation, compression, GAN
Abstract
As neural networks rapidly become deeper, there is a growing need for compact models. One popular approach is to train a small student network to mimic a larger, deeper teacher model rather than learn directly from the training data. We propose a novel technique for training student-teacher networks without directly providing label information to the student. Our main contribution is a unique strategy for learning how to learn from the teacher: the student competes with a discriminator.
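The abstract describes an adversarial twist on knowledge distillation: a discriminator tries to tell teacher outputs from student outputs, and the student is trained to fool it, so no ground-truth labels reach the student. The paper's architectures and losses are not specified here, so the following is only a minimal PyTorch sketch of that general setup; all module sizes, learning rates, and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
num_classes = 10

# Illustrative toy networks (not the paper's architectures).
teacher = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, num_classes))
student = nn.Sequential(nn.Linear(32, num_classes))  # deliberately much smaller
# Discriminator tries to distinguish teacher logits from student logits.
discriminator = nn.Sequential(
    nn.Linear(num_classes, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
)

opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

x = torch.randn(8, 32)  # unlabeled inputs: no ground-truth labels are used
with torch.no_grad():
    t_logits = teacher(x)  # fixed teacher outputs serve as "real" samples

for step in range(100):
    # Discriminator step: teacher logits are "real" (1), student logits "fake" (0).
    s_logits = student(x).detach()
    d_real = discriminator(t_logits)
    d_fake = discriminator(s_logits)
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Student step: fool the discriminator into labeling its logits as "teacher".
    s_logits = student(x)
    d_out = discriminator(s_logits)
    loss_s = bce(d_out, torch.ones_like(d_out))
    opt_s.zero_grad()
    loss_s.backward()
    opt_s.step()
```

In this sketch the student never sees labels; its only training signal is the discriminator's judgment of how teacher-like its outputs are, which matches the abstract's claim of learning from the teacher without direct label supervision.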
Published
2018-04-29
How to Cite
Prabhu, A., Krishna, H., & Saha, S. (2018). Adversary Is the Best Teacher: Towards Extremely Compact Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.12182
Section
Student Abstract Track