Self-Supervised Video Representation Learning via Latent Time Navigation
DOI:
https://doi.org/10.1609/aaai.v37i3.25416
Keywords:
CV: Video Understanding & Activity Analysis
Abstract
Self-supervised video representation learning typically aims to maximize the similarity between different temporal segments of one video in order to enforce feature persistence over time. This discards pertinent information about temporal relationships, rendering actions such as 'enter' and 'leave' indistinguishable. To mitigate this limitation, we propose Latent Time Navigation (LTN), a time-parameterized contrastive learning strategy that is streamlined to capture fine-grained motions. Specifically, we maximize the representation similarity between different video segments from one video, while keeping their representations time-aware along a subspace of the latent code that includes an orthogonal basis representing temporal changes. Our extensive experimental analysis suggests that learning video representations with LTN consistently improves the performance of action classification on fine-grained and human-oriented tasks (e.g., on the Toyota Smarthome dataset). In addition, we demonstrate that our proposed model, when pre-trained on Kinetics-400, generalizes well to the unseen real-world video benchmarks UCF101 and HMDB51, achieving state-of-the-art performance in action recognition.
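As a rough illustration of the idea summarized in the abstract, the sketch below shows one way a time-parameterized contrastive objective could be set up: embeddings of two clips from the same video are pulled together, but the prediction of one from the other is shifted along a learned orthogonal "time" subspace according to their temporal offset. All module names, dimensions, and the specific loss form here are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (hypothetical) of a time-parameterized contrastive objective
# with an orthogonal latent "time" subspace, loosely following the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentTimeNavigation(nn.Module):
    def __init__(self, feat_dim=512, time_dim=16):
        super().__init__()
        # Free parameters; an orthonormal time basis is obtained from them via QR.
        self.basis_param = nn.Parameter(torch.randn(feat_dim, time_dim))
        # Maps a scalar temporal offset to coefficients over the time basis (assumed design).
        self.time_mlp = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, time_dim))

    def orthogonal_basis(self):
        # QR decomposition yields orthonormal columns spanning the time subspace.
        q, _ = torch.linalg.qr(self.basis_param)
        return q  # (feat_dim, time_dim)

    def navigate(self, z, dt):
        # Shift representation z along the time subspace by an amount that
        # depends on the temporal offset dt between the two clips.
        basis = self.orthogonal_basis()
        coeff = self.time_mlp(dt.unsqueeze(-1))   # (B, time_dim)
        return z + coeff @ basis.t()              # (B, feat_dim)

def ltn_contrastive_loss(z1, z2, dt, ltn, temperature=0.1):
    # z1, z2: clip embeddings from the same videos (B, feat_dim);
    # dt: temporal offset of clip 2 relative to clip 1 (e.g., in seconds).
    z1_shifted = ltn.navigate(z1, dt)             # predict clip-2 code from clip 1
    p = F.normalize(z1_shifted, dim=-1)
    q = F.normalize(z2, dim=-1)
    logits = p @ q.t() / temperature              # (B, B) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    return F.cross_entropy(logits, labels)

if __name__ == "__main__":
    ltn = LatentTimeNavigation()
    z1, z2 = torch.randn(8, 512), torch.randn(8, 512)  # stand-ins for backbone features
    dt = torch.rand(8) * 4.0                           # offsets between sampled clips
    print(ltn_contrastive_loss(z1, z2, dt, ltn).item())
```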
Published
2023-06-26
How to Cite
Yang, D., Wang, Y., Kong, Q., Dantcheva, A., Garattoni, L., Francesca, G., & Brémond, F. (2023). Self-Supervised Video Representation Learning via Latent Time Navigation. Proceedings of the AAAI Conference on Artificial Intelligence, 37(3), 3118-3126. https://doi.org/10.1609/aaai.v37i3.25416
Issue
Section
AAAI Technical Track on Computer Vision III