XKD: Cross-Modal Knowledge Distillation with Domain Alignment for Video Representation Learning

Authors

  • Pritam Sarkar, Queen's University & Vector Institute
  • Ali Etemad, Queen's University

DOI:

https://doi.org/10.1609/aaai.v38i13.29407

Keywords:

ML: Unsupervised & Self-Supervised Learning, CV: Video Understanding & Activity Analysis

Abstract

We present XKD, a novel self-supervised framework to learn meaningful representations from unlabelled videos. XKD is trained with two pseudo objectives. First, masked data reconstruction is performed to learn modality-specific representations from audio and visual streams. Next, self-supervised cross-modal knowledge distillation is performed between the two modalities through a teacher-student setup to learn complementary information. We introduce a novel domain alignment strategy to tackle the domain discrepancy between audio and visual modalities, enabling effective cross-modal knowledge distillation. Additionally, to develop a general-purpose network capable of handling both audio and visual streams, modality-agnostic variants of XKD are introduced, which use the same pretrained backbone for different audio and visual tasks. Our proposed cross-modal knowledge distillation improves video action classification by 8% to 14% on UCF101, HMDB51, and Kinetics400. Additionally, XKD improves multimodal action classification by 5.5% on Kinetics-Sound. XKD shows state-of-the-art performance in sound classification on ESC50, achieving a top-1 accuracy of 96.5%.
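To make the teacher-student setup described in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of one cross-modal distillation step. It is not the authors' code: the `Encoder`, the EMA teacher update, the statistics-matching `align_domain` function, and all hyperparameters are illustrative assumptions standing in for the paper's transformer backbones and its actual domain alignment strategy.

```python
# Hypothetical sketch of XKD-style cross-modal distillation; all names and
# hyperparameters are illustrative, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Stand-in MLP encoder; the paper uses transformer-style backbones."""
    def __init__(self, in_dim: int, embed_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, embed_dim),
            nn.GELU(),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

@torch.no_grad()
def ema_update(teacher: nn.Module, student: nn.Module, m: float = 0.999) -> None:
    # Teacher tracks the student via an exponential moving average, a common
    # choice for self-supervised teacher-student distillation (assumed here).
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(m).add_(ps, alpha=1.0 - m)

def align_domain(t_feat: torch.Tensor, s_feat: torch.Tensor) -> torch.Tensor:
    # Illustrative domain alignment: renormalize teacher features to the
    # student modality's batch statistics so the two feature distributions
    # are comparable (an assumption, not the paper's exact scheme).
    t = (t_feat - t_feat.mean(0)) / (t_feat.std(0) + 1e-6)
    return t * (s_feat.std(0) + 1e-6) + s_feat.mean(0)

def distill_loss(s_feat: torch.Tensor, t_feat: torch.Tensor) -> torch.Tensor:
    # Student regresses the detached, domain-aligned teacher features.
    return 1.0 - F.cosine_similarity(s_feat, t_feat.detach(), dim=-1).mean()

# Toy usage: the video student distills from the audio teacher (the full
# framework also distills in the reverse direction, after the masked
# data reconstruction stage has trained modality-specific encoders).
video_student, audio_student = Encoder(768), Encoder(128)
video_teacher, audio_teacher = Encoder(768), Encoder(128)
video_teacher.load_state_dict(video_student.state_dict())
audio_teacher.load_state_dict(audio_student.state_dict())

video, audio = torch.randn(32, 768), torch.randn(32, 128)
s_v = video_student(video)                 # student video features
with torch.no_grad():
    t_a = audio_teacher(audio)             # teacher audio features
loss = distill_loss(s_v, align_domain(t_a, s_v))
loss.backward()
ema_update(audio_teacher, audio_student)   # refresh teacher after each optimizer step
```

The alignment step is the key design point the abstract highlights: without bringing the audio and visual feature distributions into a common range, the cross-modal regression target would sit in a mismatched domain and the distillation signal would be weak.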

Published

2024-03-24

How to Cite

Sarkar, P., & Etemad, A. (2024). XKD: Cross-Modal Knowledge Distillation with Domain Alignment for Video Representation Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(13), 14875-14885. https://doi.org/10.1609/aaai.v38i13.29407

Issue

Vol. 38 No. 13 (2024)

Section

AAAI Technical Track on Machine Learning IV