Boosting Residual Networks with Group Knowledge
DOI:
https://doi.org/10.1609/aaai.v38i6.28322
Keywords:
CV: Learning & Optimization for CV, CV: Other Foundations of Computer Vision, CV: Representation Learning for Vision, CV: Segmentation
Abstract
Recent research interprets residual networks from the new perspective of an implicit ensemble model. From this view, previous methods such as stochastic depth and stimulative training have further improved the performance of residual networks by sampling and training their subnets. However, they both apply the same supervision to all subnets of different capacities and neglect the valuable knowledge generated by subnets during training. In this work, we mitigate the significant knowledge distillation gap caused by using uniform supervision and advocate leveraging the subnets themselves to provide diverse knowledge. Based on this motivation, we propose a group knowledge based training framework for boosting the performance of residual networks. Specifically, we implicitly divide all subnets into hierarchical groups by subnet-in-subnet sampling, aggregate the knowledge of the subnets in each group during training, and exploit the upper-level group knowledge to supervise the lower-level subnet groups. Meanwhile, we also develop a subnet sampling strategy that naturally favors larger subnets, which we find more helpful than smaller subnets in boosting the performance of hierarchical groups. Compared with typical subnet training and other methods, our method achieves the best efficiency and performance trade-offs on multiple datasets and network structures. The code is available at https://github.com/tsj-001/AAAI24-GKT.
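To make the training scheme described above concrete, the following is a minimal PyTorch sketch of the general idea, not the authors' released implementation (see the GitHub link above for that): residual blocks are dropped via a boolean mask to form subnets, nested (subnet-in-subnet) masks define a hierarchy of groups, and each lower-level subnet is distilled from the averaged soft targets of the levels above it. All names here (ToyResNet, sample_subnet_chain, group_knowledge_step) and the specific logit-averaging aggregation are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyResNet(nn.Module):
    # A toy residual network; any block whose mask bit is False is bypassed,
    # so a mask over blocks defines a subnet (the implicit-ensemble view).
    def __init__(self, depth=8, dim=32, num_classes=10):
        super().__init__()
        self.stem = nn.Linear(16, dim)
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
            for _ in range(depth))
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x, mask=None):
        h = self.stem(x)
        for i, blk in enumerate(self.blocks):
            if mask is None or mask[i]:  # a skipped block reduces to identity
                h = h + blk(h)
        return self.head(h)

def sample_subnet_chain(depth, levels=3, keep_ratio=0.75):
    # Subnet-in-subnet sampling (hypothetical form): each level keeps a random
    # subset of the blocks kept by the level above, so masks are nested and
    # upper levels are naturally the larger subnets.
    masks = [torch.ones(depth, dtype=torch.bool)]  # level 0: full network
    for _ in range(1, levels):
        masks.append(masks[-1] & (torch.rand(depth) < keep_ratio))
    return masks

def group_knowledge_step(model, x, y, masks, tau=4.0, alpha=1.0):
    # One training step: the full network is trained with cross-entropy, and
    # each lower-level subnet is supervised by the averaged, detached soft
    # targets of all upper levels -- a simple stand-in for the paper's
    # group knowledge aggregation.
    logits = [model(x, m) for m in masks]  # index 0 = largest subnet
    loss = F.cross_entropy(logits[0], y)
    for lvl in range(1, len(masks)):
        teacher = torch.stack([z.detach() for z in logits[:lvl]]).mean(0)
        loss = loss + alpha * (tau ** 2) * F.kl_div(
            F.log_softmax(logits[lvl] / tau, dim=-1),
            F.softmax(teacher / tau, dim=-1),
            reduction="batchmean")
    return loss

# Usage with random data, purely to show the shape of one training step:
model = ToyResNet()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(64, 16), torch.randint(0, 10, (64,))
loss = group_knowledge_step(model, x, y, sample_subnet_chain(depth=8))
opt.zero_grad()
loss.backward()
opt.step()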
Published
2024-03-24
How to Cite
Tang, S., Ye, P., Li, B., Lin, W., Chen, T., He, T., Yu, C., & Ouyang, W. (2024). Boosting Residual Networks with Group Knowledge. Proceedings of the AAAI Conference on Artificial Intelligence, 38(6), 5162-5170. https://doi.org/10.1609/aaai.v38i6.28322
Issue
Vol. 38 No. 6 (2024)
Section
AAAI Technical Track on Computer Vision V