TY  - JOUR
AU  - Liu, Yu
AU  - Huang, Lianghua
AU  - Pan, Pan
AU  - Wang, Bin
AU  - Xu, Yinghui
AU  - Jin, Rong
PY  - 2021/05/18
Y2  - 2024/03/29
TI  - Train a One-Million-Way Instance Classifier for Unsupervised Visual Representation Learning
JF  - Proceedings of the AAAI Conference on Artificial Intelligence
JA  - AAAI
VL  - 35
IS  - 10
SE  - AAAI Technical Track on Machine Learning III
DO  - 10.1609/aaai.v35i10.17055
UR  - https://ojs.aaai.org/index.php/AAAI/article/view/17055
SP  - 8706-8714
AB  - This paper presents a simple unsupervised visual representation learning method with a pretext task of discriminating all images in a dataset using a parametric, instance-level classifier. The overall framework is a replica of a supervised classification model, where semantic classes (e.g., dog, bird, and ship) are replaced by instance IDs. However, scaling up the classification task from thousands of semantic labels to millions of instance labels brings specific challenges, including 1) the large-scale softmax computation; 2) the slow convergence due to infrequent visits to each instance sample; and 3) the massive number of negative classes, which can be noisy. This work presents several novel techniques to handle these difficulties. First, we introduce a hybrid parallel training framework to make large-scale training feasible. Second, we present a raw-feature initialization mechanism for classification weights, which we assume offers a contrastive prior for instance discrimination and can clearly speed up convergence in our experiments. Finally, we propose to smooth the labels of a few hardest classes to avoid optimizing over very similar negative pairs. While being conceptually simple, our framework achieves competitive or superior performance compared to state-of-the-art unsupervised approaches, i.e., SimCLR, MoCoV2, and PIC, under the ImageNet linear evaluation protocol and on several downstream visual tasks, verifying that full instance classification is a strong pretraining technique for many semantic visual tasks.
ER  - 