Cross-Modal Coherence for Text-to-Image Retrieval
DOI:
https://doi.org/10.1609/aaai.v36i10.21285

Keywords:
Speech & Natural Language Processing (SNLP), Computer Vision (CV)

Abstract
Common image-text joint understanding techniques presume that images and the associated text can universally be characterized by a single implicit model. However, co-occurring images and text can be related in qualitatively different ways, and explicitly modeling these relations could improve the performance of current joint understanding models. In this paper, we train a Cross-Modal Coherence Model for the text-to-image retrieval task. Our analysis shows that models trained with image–text coherence relations retrieve the images originally paired with the target text more often than coherence-agnostic models. We also show via human evaluation that images retrieved by the proposed coherence-aware model are preferred over those from a coherence-agnostic baseline by a substantial margin. Our findings provide insights into the ways that different modalities communicate and the role of coherence relations in capturing commonsense inferences in text and imagery.
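The abstract describes training retrieval with coherence relations as an explicit signal, but does not specify the architecture. The sketch below is one plausible reading, not the authors' implementation: a dual-encoder retrieval model with an auxiliary head that classifies the coherence relation between an image and its text. All names (CoherenceAwareRetrieval, relation_head), the dimensions, and the five-relation label set are illustrative assumptions.

```python
# Hypothetical sketch of coherence-aware text-to-image retrieval.
# Assumes pre-extracted image/text features; names and label set are
# illustrative, not the paper's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_RELATIONS = 5  # assumed relation inventory, e.g. Visible/Subjective/Action/Story/Meta

class CoherenceAwareRetrieval(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=768, embed_dim=256):
        super().__init__()
        # Project pre-extracted image and text features into a joint space.
        self.img_proj = nn.Linear(img_dim, embed_dim)
        self.txt_proj = nn.Linear(txt_dim, embed_dim)
        # Auxiliary classifier over the coherence relation of a pair.
        self.relation_head = nn.Sequential(
            nn.Linear(2 * embed_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, NUM_RELATIONS),
        )

    def forward(self, img_feats, txt_feats):
        img = F.normalize(self.img_proj(img_feats), dim=-1)
        txt = F.normalize(self.txt_proj(txt_feats), dim=-1)
        sim = txt @ img.t()  # text-to-image retrieval scores (B x B)
        rel_logits = self.relation_head(torch.cat([img, txt], dim=-1))
        return sim, rel_logits

def loss_fn(sim, rel_logits, rel_labels, temperature=0.07):
    # Symmetric in-batch contrastive loss for retrieval, plus the
    # coherence-relation cross-entropy as an auxiliary training signal.
    targets = torch.arange(sim.size(0))
    retrieval = (F.cross_entropy(sim / temperature, targets)
                 + F.cross_entropy(sim.t() / temperature, targets)) / 2
    coherence = F.cross_entropy(rel_logits, rel_labels)
    return retrieval + coherence

# Toy usage with random features standing in for encoder outputs.
model = CoherenceAwareRetrieval()
imgs, txts = torch.randn(8, 2048), torch.randn(8, 768)
labels = torch.randint(0, NUM_RELATIONS, (8,))
sim, rel_logits = model(imgs, txts)
print(loss_fn(sim, rel_logits, labels))
```

At inference, only the similarity matrix is needed for ranking candidate images; the relation head serves purely to shape the joint embedding during training, which is one way "coherence-aware" training could outperform a coherence-agnostic contrastive baseline.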
Published
2022-06-28
How to Cite
Alikhani, M., Han, F., Ravi, H., Kapadia, M., Pavlovic, V., & Stone, M. (2022). Cross-Modal Coherence for Text-to-Image Retrieval. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10), 10427–10435. https://doi.org/10.1609/aaai.v36i10.21285
Issue
Vol. 36 No. 10 (2022)
Section
AAAI Technical Track on Speech and Natural Language Processing