TY  - JOUR
AU  - Zhang, Yujia
AU  - Po, Lai-Man
AU  - Xu, Xuyuan
AU  - Liu, Mengyang
AU  - Wang, Yexin
AU  - Ou, Weifeng
AU  - Zhao, Yuzhi
AU  - Yu, Wing-Yin
PY  - 2022/06/28
Y2  - 2024/03/29
TI  - Contrastive Spatio-Temporal Pretext Learning for Self-Supervised Video Representation
JF  - Proceedings of the AAAI Conference on Artificial Intelligence
JA  - AAAI
VL  - 36
IS  - 3
SE  - AAAI Technical Track on Computer Vision III
DO  - 10.1609/aaai.v36i3.20248
UR  - https://ojs.aaai.org/index.php/AAAI/article/view/20248
SP  - 3380-3389
AB  - Spatio-temporal representation learning is critical for video self-supervised representation. Recent approaches mainly use contrastive learning and pretext tasks. However, these approaches learn representation by discriminating sampled instances via feature similarity in the latent space while ignoring the intermediate state of the learned representations, which limits the overall performance. In this work, taking into account the degree of similarity of sampled instances as the intermediate state, we propose a novel pretext task - spatio-temporal overlap rate (STOR) prediction. It stems from the observation that humans are capable of discriminating the overlap rates of videos in space and time. This task encourages the model to discriminate the STOR of two generated samples to learn the representations. Moreover, we employ a joint optimization combining pretext tasks with contrastive learning to further enhance the spatio-temporal representation learning. We also study the mutual influence of each component in the proposed scheme. Extensive experiments demonstrate that our proposed STOR task can favor both contrastive learning and pretext tasks and the joint optimization scheme can significantly improve the spatio-temporal representation in video understanding. The code is available at https://github.com/Katou2/CSTP.
ER  -