SeCo: Exploring Sequence Supervision for Unsupervised Representation Learning


  • Ting Yao JD AI Research
  • Yiheng Zhang JD AI Research
  • Zhaofan Qiu JD AI Research
  • Yingwei Pan JD AI Research
  • Tao Mei JD AI Research



Unsupervised & Self-Supervised Learning


A steady momentum of innovations and breakthroughs has convincingly pushed the limits of unsupervised image representation learning. Compared to static 2D images, video has one more dimension: time. The inherent supervision in this sequential structure offers fertile ground for building unsupervised learning models. In this paper, we compose a trilogy of exploring the basic and generic supervision in the sequence from spatial, spatiotemporal and sequential perspectives. We materialize the supervisory signals by determining whether a pair of samples comes from one frame or from one video, and whether a triplet of samples is in the correct temporal order. We regard these signals as the foundation of contrastive learning and derive a particular form named Sequence Contrastive Learning (SeCo). SeCo achieves superior results under the linear protocol on action recognition (Kinetics), untrimmed activity recognition (ActivityNet) and object tracking (OTB-100). More remarkably, SeCo demonstrates considerable improvements over recent unsupervised pre-training techniques, and outperforms fully-supervised ImageNet pre-training on action recognition by 2.96% on UCF101 and 6.47% on HMDB51. Source code is available at
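The pairwise signal described in the abstract (whether two samples come from the same frame or the same video, versus from different videos) is the kind of positive/negative distinction a standard InfoNCE-style contrastive loss can score. The sketch below is a minimal, hypothetical illustration of that scoring, not the authors' implementation; the embedding dimension, noise scale, and temperature are assumptions for the example.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project vectors onto the unit sphere so dot products are cosine similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor: low when the positive is much more
    similar to the anchor than every negative, high otherwise."""
    a = l2_normalize(anchor)
    p = l2_normalize(positive)
    n = l2_normalize(negatives)            # shape (K, d): negative samples
    logits = np.concatenate([[a @ p], n @ a]) / temperature
    logits -= logits.max()                 # numerical stability before softmax
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])               # the positive sits at index 0

rng = np.random.default_rng(0)
d = 32                                     # hypothetical embedding size
frame = rng.normal(size=d)                 # embedding of an anchor crop
same_frame = frame + 0.05 * rng.normal(size=d)   # another view of the same frame
negatives = rng.normal(size=(16, d))       # crops drawn from other videos

loss_positive = info_nce(frame, same_frame, negatives)
loss_random = info_nce(frame, rng.normal(size=d), negatives)
# A view of the same frame yields a much lower loss than an unrelated sample,
# which is exactly the gradient signal that pulls same-frame pairs together.
```

The same machinery applies at the spatiotemporal level by sampling the positive from a different frame of the same video; the sequential signal is instead posed as classifying whether a triplet of frames is in the correct temporal order.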




How to Cite

Yao, T., Zhang, Y., Qiu, Z., Pan, Y., & Mei, T. (2021). SeCo: Exploring Sequence Supervision for Unsupervised Representation Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 10656-10664.



AAAI Technical Track on Machine Learning V