SEPT: Towards Scalable and Efficient Visual Pre-training

Authors

  • Yiqi Lin AI Thrust, Information Hub, HKUST (Guangzhou), Guangzhou, China
  • Huabin Zheng SenseTime Research
  • Huaping Zhong SenseTime Research
  • Jinjing Zhu AI Thrust, Information Hub, HKUST (Guangzhou), Guangzhou, China
  • Weijia Li Sun Yat-Sen University
  • Conghui He SenseTime Research
  • Lin Wang AI Thrust, Information Hub, HKUST (Guangzhou), Guangzhou, China; Department of Computer Science and Engineering, HKUST, Hong Kong, China

DOI:

https://doi.org/10.1609/aaai.v37i2.25249

Keywords:

CV: Representation Learning for Vision, CV: Applications, CV: Other Foundations of Computer Vision

Abstract

Recently, the self-supervised pre-training paradigm has shown great potential in leveraging large-scale unlabeled data to improve downstream task performance. However, increasing the scale of unlabeled pre-training data in real-world scenarios requires prohibitive computational costs and faces the challenge of uncurated samples. To address these issues, we build a task-specific self-supervised pre-training framework from a data selection perspective, based on a simple hypothesis: pre-training on unlabeled samples whose distribution is similar to that of the target task can bring substantial performance gains. Building on this hypothesis, we propose the first framework for Scalable and Efficient visual Pre-Training (SEPT), which introduces a retrieval pipeline for data selection. SEPT first leverages a self-supervised pre-trained model to extract features from the entire unlabeled dataset to initialize the retrieval pipeline. Then, for a specific target task, SEPT retrieves the unlabeled samples most similar to each target instance based on feature similarity. Finally, SEPT pre-trains the target model on the selected unlabeled samples in a self-supervised manner before fine-tuning on the target data. By decoupling the scale of pre-training from the upstream data available for a target task, SEPT achieves high scalability of the upstream dataset and high efficiency of pre-training, which in turn yields high flexibility in model architecture. Results on various downstream tasks demonstrate that SEPT can achieve competitive or even better performance than ImageNet pre-training while reducing the number of training samples by an order of magnitude, without resorting to any extra annotations.
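The retrieval-based data selection described in the abstract can be illustrated with a short sketch. Below is a minimal, hypothetical implementation of the per-instance nearest-neighbor selection using plain NumPy cosine similarity; the paper's actual feature extractor, retrieval backend, and the choice of k are not specified here, so the function name, k value, and feature dimensions are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def select_pretraining_subset(target_feats: np.ndarray,
                              unlabeled_feats: np.ndarray,
                              k: int = 10) -> np.ndarray:
    """For each target instance, retrieve the k most similar unlabeled
    samples by cosine similarity and return the union of their indices.

    Illustrative sketch of SEPT's data-selection idea, not the authors'
    implementation.
    """
    # L2-normalize features so that a dot product equals cosine similarity.
    t = target_feats / np.linalg.norm(target_feats, axis=1, keepdims=True)
    u = unlabeled_feats / np.linalg.norm(unlabeled_feats, axis=1, keepdims=True)

    # Dense similarity matrix of shape (num_target, num_unlabeled).
    sims = t @ u.T

    # Indices of the k most similar unlabeled samples per target instance.
    topk = np.argpartition(-sims, kth=k - 1, axis=1)[:, :k]

    # Deduplicate across target instances to form the pre-training subset.
    return np.unique(topk)

# Example: features from a frozen self-supervised encoder (dimensions assumed).
rng = np.random.default_rng(0)
target = rng.normal(size=(100, 2048)).astype(np.float32)
unlabeled = rng.normal(size=(100_000, 2048)).astype(np.float32)
subset = select_pretraining_subset(target, unlabeled, k=10)
print(subset.shape)  # at most 100 * 10 unique indices
```

At the scale of millions of unlabeled images, an approximate nearest-neighbor index (e.g., FAISS) would presumably replace the dense similarity matrix; the selected subset then feeds a standard self-supervised pre-training recipe before fine-tuning on the target data.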

Published

2023-06-26

How to Cite

Lin, Y., Zheng, H., Zhong, H., Zhu, J., Li, W., He, C., & Wang, L. (2023). SEPT: Towards Scalable and Efficient Visual Pre-training. Proceedings of the AAAI Conference on Artificial Intelligence, 37(2), 1622-1630. https://doi.org/10.1609/aaai.v37i2.25249

Issue

Vol. 37 No. 2 (2023): Proceedings of the AAAI Conference on Artificial Intelligence

Section

AAAI Technical Track on Computer Vision II