Non-Autoregressive Coarse-to-Fine Video Captioning

Authors

  • Bang Yang, Peking University
  • Yuexian Zou, Peking University; Peng Cheng Laboratory
  • Fenglin Liu, Peking University
  • Can Zhang, Peking University

DOI:

https://doi.org/10.1609/aaai.v35i4.16421

Keywords:

Language and Vision, Language Grounding & Multi-modal NLP, Video Understanding & Activity Analysis, Generation

Abstract

It is encouraging to see that progress has been made in bridging videos and natural language. However, mainstream video captioning methods suffer from slow inference speed due to the sequential nature of autoregressive decoding, and they tend to generate generic descriptions because of insufficient training of visual words (e.g., nouns and verbs) and an inadequate decoding paradigm. In this paper, we propose a non-autoregressive decoding based model with a coarse-to-fine captioning procedure to alleviate these defects. In our implementation, we employ a bi-directional self-attention based network as the language model to achieve an inference speedup, and on top of it we decompose the captioning procedure into two stages with different focuses. Specifically, since visual words determine the semantic correctness of captions, we design a visual word generation mechanism that not only promotes the training of scene-related words but also captures relevant details from videos to construct a coarse-grained sentence "template". Thereafter, we devise dedicated decoding algorithms that fill in the "template" with suitable words and correct inappropriate phrasing via iterative refinement to obtain a fine-grained description. Extensive experiments on two mainstream video captioning benchmarks, i.e., MSVD and MSR-VTT, demonstrate that our approach achieves state-of-the-art performance, generates diverse descriptions, and attains high inference efficiency.
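To make the two-stage procedure concrete, the sketch below shows one plausible realization of coarse-to-fine non-autoregressive decoding with mask-predict style iterative refinement. It is not the authors' released code: the model interface (visual_word_scores, predict_tokens), the MASK_ID constant, the confidence threshold, and the re-masking schedule are all illustrative assumptions.

import torch

MASK_ID = 0  # hypothetical id of the [MASK] token in the vocabulary

def coarse_to_fine_decode(model, video_feats, max_len=20, iterations=5):
    """Illustrative two-stage non-autoregressive decoding (interfaces assumed)."""
    # ---- Coarse stage: predict visual words to form a sentence "template" ----
    tokens = torch.full((1, max_len), MASK_ID, dtype=torch.long)
    v_logits = model.visual_word_scores(video_feats, tokens)   # assumed API
    v_probs, v_tokens = v_logits.softmax(-1).max(-1)
    confident = v_probs > 0.5                                   # assumed threshold
    tokens = torch.where(confident, v_tokens, tokens)
    probs = torch.where(confident, v_probs, torch.zeros(1, max_len))

    # ---- Fine stage: fill the template, then iteratively refine ----
    for t in range(iterations):
        logits = model.predict_tokens(video_feats, tokens)      # assumed API
        step_probs, step_tokens = logits.softmax(-1).max(-1)
        masked = tokens == MASK_ID                               # fill empty slots in parallel
        tokens = torch.where(masked, step_tokens, tokens)
        probs = torch.where(masked, step_probs, probs)

        if t == iterations - 1:
            break
        # Re-mask the least confident positions (count decays over iterations)
        # so later passes can correct inappropriate phrasing.
        n_mask = int(max_len * (iterations - 1 - t) / iterations)
        if n_mask > 0:
            worst = probs.topk(n_mask, largest=False).indices   # shape (1, n_mask)
            tokens[0, worst[0]] = MASK_ID
            probs[0, worst[0]] = 0.0
    return tokens

The parallel fill plus confidence-based re-masking replaces left-to-right autoregressive generation, which is where the inference speedup in such approaches comes from.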

Published

2021-05-18

How to Cite

Yang, B., Zou, Y., Liu, F., & Zhang, C. (2021). Non-Autoregressive Coarse-to-Fine Video Captioning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(4), 3119-3127. https://doi.org/10.1609/aaai.v35i4.16421

Section

AAAI Technical Track on Computer Vision III