Cycle-Consistency Learning for Captioning and Grounding

Authors

  • Ning Wang, Huawei Inc.
  • Jiajun Deng, University of Adelaide, Australian Institute for Machine Learning
  • Mingbo Jia, Huawei Inc.

DOI:

https://doi.org/10.1609/aaai.v38i6.28363

Keywords:

CV: Language and Vision, CV: Multi-modal Vision, NLP: Language Grounding & Multi-modal NLP

Abstract

We show that visual grounding and image captioning, two mutually inverse processes, can be bridged for collaborative training through careful design. Building on this idea, we introduce CyCo, a cycle-consistent learning framework that improves upon the independent training pipelines of visual grounding and image captioning. The proposed framework (1) enables semi-weakly supervised training of visual grounding; (2) improves the performance of fully supervised visual grounding; (3) yields a general captioning model that can describe arbitrary image regions. Extensive experiments show that our fully supervised grounding model achieves state-of-the-art performance, and the semi-weakly supervised one exhibits competitive performance compared to fully supervised counterparts. Our image captioning model can freely describe image regions while also showing impressive performance on prevalent captioning benchmarks.
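The cycle-consistency idea in the abstract can be illustrated with a minimal sketch: caption a region, re-ground the generated caption, and penalize how far the recovered box drifts from the original. The function names (`captioner`, `grounder`, `cycle_consistency_loss`) and the IoU-based penalty are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of cycle-consistent training between captioning and
# grounding. All names and the IoU-based loss are illustrative assumptions.

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def cycle_consistency_loss(image, box, captioner, grounder):
    """Caption a region, re-ground the caption, and penalize box drift.

    Returns 0 when the grounder exactly recovers the captioned region,
    approaching 1 as the recovered box diverges from the original.
    """
    caption = captioner(image, box)      # region -> text
    box_back = grounder(image, caption)  # text -> region
    return 1.0 - iou(box, box_back)
```

With perfectly inverse stub models, the loss is zero; in training, this signal can supplement (or, for unlabeled regions, replace) direct box supervision.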

Published

2024-03-24

How to Cite

Wang, N., Deng, J., & Jia, M. (2024). Cycle-Consistency Learning for Captioning and Grounding. Proceedings of the AAAI Conference on Artificial Intelligence, 38(6), 5535-5543. https://doi.org/10.1609/aaai.v38i6.28363

Section

AAAI Technical Track on Computer Vision V