Token Mixing: Parameter-Efficient Transfer Learning from Image-Language to Video-Language
DOI: https://doi.org/10.1609/aaai.v37i2.25267
Keywords: CV: Language and Vision
Abstract
Applying large-scale pre-trained image-language models to video-language tasks has recently become a trend, which brings two challenges. One is how to effectively transfer knowledge from static images to dynamic videos, and the other is how to deal with the prohibitive cost of full fine-tuning due to growing model size. Existing works that attempt to realize parameter-efficient image-language to video-language transfer learning can be categorized into two types: 1) appending a sequence of temporal transformer blocks after the 2D Vision Transformer (ViT), and 2) inserting a temporal block into the ViT architecture. While these two types of methods only require fine-tuning the newly added components, there are still many parameters to update, and they are only validated on a single video-language task. In this work, based on our analysis of the core ideas of different temporal modeling components in existing approaches, we propose a token mixing strategy to enable cross-frame interactions, which enables transferring from the pre-trained image-language model to video-language tasks by selecting and mixing a key set and a value set from the input video samples. As token mixing does not require the addition of any components or modules, we can directly partially fine-tune the pre-trained image-language model to achieve parameter efficiency. We carry out extensive experiments to compare our proposed token mixing method with other parameter-efficient transfer learning methods. Our token mixing method outperforms other methods on both understanding tasks and generation tasks. Besides, our method achieves new records on multiple video-language tasks. The code is available at https://github.com/yuqi657/video_language_model.
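The core idea the abstract describes, selecting and mixing a key/value token set across frames so a frame-wise 2D ViT gains cross-frame interactions without any new modules, can be illustrated with a minimal sketch. This is an assumption-laden toy, not the authors' implementation: the function name `mix_tokens`, the tensor layout `(B, T, N, D)`, the mixing ratio, and the choice of the next frame as the mixing source are all hypothetical choices for illustration.

```python
import torch

def mix_tokens(frame_tokens: torch.Tensor, mix_ratio: float = 0.3) -> torch.Tensor:
    """Hypothetical sketch of cross-frame token mixing.

    frame_tokens: (B, T, N, D) = batch, frames, tokens per frame, embed dim.
    For each frame, a randomly selected subset of its tokens is replaced
    with the corresponding tokens from the next frame (cyclically over time).
    Feeding the mixed tokens through an unmodified, frame-wise 2D ViT block
    then lets attention within one frame see information from another frame,
    without adding any temporal modules or parameters.
    """
    B, T, N, D = frame_tokens.shape
    n_mix = int(N * mix_ratio)
    mixed = frame_tokens.clone()
    # Token positions to swap (shared across batch and frames for simplicity).
    idx = torch.randperm(N)[:n_mix]
    # Roll along the time axis so frame t receives tokens from frame t+1.
    neighbor = torch.roll(frame_tokens, shifts=-1, dims=1)
    mixed[:, :, idx, :] = neighbor[:, :, idx, :]
    return mixed
```

In this sketch the only trainable decision is which pre-trained layers to partially fine-tune; the mixing itself is parameter-free, which is what makes the transfer parameter-efficient compared with appending or inserting temporal transformer blocks.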
Published
2023-06-26
How to Cite
Liu, Y., Xu, L., Xiong, P., & Jin, Q. (2023). Token Mixing: Parameter-Efficient Transfer Learning from Image-Language to Video-Language. Proceedings of the AAAI Conference on Artificial Intelligence, 37(2), 1781-1789. https://doi.org/10.1609/aaai.v37i2.25267
Section: AAAI Technical Track on Computer Vision II