MuLTI: Efficient Video-and-Language Understanding with Text-Guided MultiWay-Sampler and Multiple Choice Modeling
DOI:
https://doi.org/10.1609/aaai.v38i6.28448
Keywords:
CV: Multi-modal Vision, CV: Applications, CV: Image and Video Retrieval, CV: Language and Vision, CV: Video Understanding & Activity Analysis
Abstract
Video-and-language understanding has a variety of industrial applications, such as video question answering, text-video retrieval, and multi-label classification. Existing video-and-language understanding methods generally adopt heavy multi-modal encoders and feature fusion modules, which incur high computational costs. In particular, they have difficulty handling the dense video frames or long text prevalent in industrial applications. This paper proposes MuLTI, a highly accurate and efficient video-and-language understanding model that achieves efficient and effective feature fusion and rapid adaptation to downstream tasks. Specifically, we design a Text-Guided MultiWay-Sampler based on adapt-pooling residual mapping and self-attention modules to sample long sequences and fuse multi-modal features, which reduces the computational costs and addresses the performance degradation caused by previous samplers. MuLTI can therefore handle longer sequences at limited computational cost. Then, to further improve the model's performance and compensate for the lack of pretraining tasks tailored to video question answering, we propose a new pretraining task named Multiple Choice Modeling. This task bridges the gap between pretraining and downstream tasks and improves the model's ability to align video and text features. Benefiting from the efficient feature fusion module and the new pretraining task, MuLTI achieves state-of-the-art performance on multiple datasets. Implementation and pretrained models will be released.
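The abstract describes a sampler that condenses long video-token sequences under text guidance, combining attention with an adaptive-pooling residual path. The sketch below is only an illustrative reading of that idea, not the authors' released implementation; the module name, query count, and dimensions are assumptions.

```python
# Hypothetical sketch of a text-guided sampler with an adaptive-pooling residual.
# Not the MuLTI codebase; names and hyperparameters are illustrative only.
import torch
import torch.nn as nn


class TextGuidedSampler(nn.Module):
    def __init__(self, dim=768, num_queries=32, num_heads=12):
        super().__init__()
        # learned queries that attend over the concatenated video/text tokens
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # residual branch: adaptively pool the long video sequence to num_queries tokens
        self.pool = nn.AdaptiveAvgPool1d(num_queries)
        self.norm = nn.LayerNorm(dim)

    def forward(self, video_tokens, text_tokens):
        # video_tokens: (B, Lv, D) long frame-token sequence
        # text_tokens:  (B, Lt, D) text features that guide the sampling
        b = video_tokens.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)        # (B, Nq, D)
        kv = torch.cat([video_tokens, text_tokens], dim=1)     # fuse modalities as keys/values
        sampled, _ = self.attn(q, kv, kv)                      # (B, Nq, D)
        residual = self.pool(video_tokens.transpose(1, 2)).transpose(1, 2)  # (B, Nq, D)
        return self.norm(sampled + residual)


if __name__ == "__main__":
    sampler = TextGuidedSampler()
    v = torch.randn(2, 2048, 768)   # e.g., dense frame tokens
    t = torch.randn(2, 64, 768)     # text tokens
    print(sampler(v, t).shape)      # torch.Size([2, 32, 768])
```

The residual branch keeps a cheap summary of the full video sequence so that downsampling through attention alone does not discard information, which is one plausible reading of the "adapt-pooling residual mapping" mentioned in the abstract.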
Published
2024-03-24
How to Cite
Xu, J., Liu, B., Chen, Y., Cheng, M., & Shi, X. (2024). MuLTI: Efficient Video-and-Language Understanding with Text-Guided MultiWay-Sampler and Multiple Choice Modeling. Proceedings of the AAAI Conference on Artificial Intelligence, 38(6), 6297-6305. https://doi.org/10.1609/aaai.v38i6.28448
Section
AAAI Technical Track on Computer Vision V