Revisiting Classifier: Transferring Vision-Language Models for Video Recognition

Authors

  • Wenhao Wu The University of Sydney, NSW, Australia
  • Zhun Sun Baidu Inc., Beijing, China
  • Wanli Ouyang Shanghai Artificial Intelligence Laboratory, Shanghai, China

DOI:

https://doi.org/10.1609/aaai.v37i3.25386

Keywords:

CV: Video Understanding & Activity Analysis, CV: Applications, CV: Scene Analysis & Understanding, CV: Language and Vision

Abstract

Transferring knowledge from task-agnostic pre-trained deep models to downstream tasks is an important topic in computer vision research. Along with the growth of computational capacity, we now have open-source vision-language pre-trained models at large scale in both model architecture and amount of data. In this study, we focus on transferring knowledge for video classification tasks. Conventional methods randomly initialize the linear classifier head for vision classification, leaving the use of the text encoder for downstream visual recognition tasks unexplored. In this paper, we revise the role of the linear classifier and replace the classifier with different knowledge from the pre-trained model: we utilize the well-pre-trained language model to generate good semantic targets for efficient transfer learning. The empirical study shows that our method improves both the performance and the training speed of video classification, with a negligible change to the model. Our simple yet effective tuning paradigm achieves state-of-the-art performance and efficient training on various video recognition scenarios, i.e., zero-shot, few-shot, and general recognition. In particular, our paradigm achieves state-of-the-art accuracy of 87.8% on Kinetics-400, and also surpasses previous methods by 20–50% absolute top-1 accuracy under zero-shot and few-shot settings on five video datasets. Code and models are available at https://github.com/whwu95/Text4Vis.
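The core idea in the abstract can be illustrated with a minimal sketch: rather than learning a randomly initialized linear head, the classifier weights are fixed to (L2-normalized) embeddings of the class names produced by a pre-trained text encoder, and logits are cosine similarities between video features and those targets. The values below are stand-ins for real encoder outputs; the function name and shapes are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, embed_dim = 4, 8

# Frozen "text embeddings" standing in for a pre-trained text encoder
# applied to the class names (hypothetical random values for illustration).
text_embeds = rng.standard_normal((num_classes, embed_dim))
text_embeds /= np.linalg.norm(text_embeds, axis=1, keepdims=True)

def classify(video_feats: np.ndarray) -> np.ndarray:
    """Score video features against the frozen text-embedding classifier.

    Instead of a randomly initialized linear head, the classifier weights
    are the normalized text embeddings; logits are cosine similarities.
    """
    feats = video_feats / np.linalg.norm(video_feats, axis=1, keepdims=True)
    return feats @ text_embeds.T  # (batch, num_classes) cosine logits

# Toy check: a feature aligned with class 2's embedding scores highest there.
video_feats = text_embeds[2:3] + 0.01 * rng.standard_normal((1, embed_dim))
logits = classify(video_feats)
pred = int(logits.argmax())
```

Because the text embeddings encode semantic relations between class names, this frozen-classifier setup also extends naturally to the zero-shot setting: unseen classes only require embedding their names.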

Published

2023-06-26

How to Cite

Wu, W., Sun, Z., & Ouyang, W. (2023). Revisiting Classifier: Transferring Vision-Language Models for Video Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 37(3), 2847-2855. https://doi.org/10.1609/aaai.v37i3.25386

Section

AAAI Technical Track on Computer Vision III