Action-Aware Embedding Enhancement for Image-Text Retrieval

Authors

  • Jiangtong Li, Shanghai Jiao Tong University
  • Li Niu, Shanghai Jiao Tong University
  • Liqing Zhang, Shanghai Jiao Tong University

DOI:

https://doi.org/10.1609/aaai.v36i2.20020

Keywords:

Computer Vision (CV)

Abstract

Image-text retrieval plays a central role in bridging vision and language, aiming to reduce the semantic discrepancy between images and texts. Most existing works rely on refined word and object representations, obtained through data-oriented methods, to capture word-object co-occurrence. Such approaches tend to ignore the asymmetric action relation between images and texts: a text carries an explicit action representation (i.e., a verb phrase), while an image contains only implicit action information. In this paper, we propose an Action-aware Memory-Enhanced embedding (AME) method for image-text retrieval, which emphasizes action information when mapping images and texts into a shared embedding space. Specifically, we integrate action prediction with an action-aware memory bank to enrich the image and text features with action-similar text features. The effectiveness of our proposed AME method is verified by comprehensive experimental results on two benchmark datasets.
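The abstract's core idea, enriching a feature with action-similar text features drawn from a memory bank, can be illustrated with a minimal sketch. This is not the authors' code: the class name, the cosine-similarity retrieval, the top-k selection, and the mixing weight `alpha` are all illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch (NOT the paper's implementation): a memory bank that
# stores text embeddings grouped by action label, then enriches a query
# embedding with the mean of its top-k action-similar entries.
import math


def cosine(u, v):
    """Cosine similarity between two plain-list vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0


class ActionMemoryBank:
    def __init__(self):
        self.bank = {}  # action label -> list of stored text embeddings

    def write(self, action, text_feat):
        """Store a text embedding under its (predicted) action label."""
        self.bank.setdefault(action, []).append(text_feat)

    def enrich(self, feat, predicted_action, k=2, alpha=0.5):
        """Blend `feat` with the mean of its k most action-similar memory
        entries; `alpha` (an assumed hyperparameter) controls the mix."""
        entries = self.bank.get(predicted_action, [])
        if not entries:
            return feat  # no memory for this action: leave the feature as-is
        top = sorted(entries, key=lambda e: cosine(feat, e), reverse=True)[:k]
        mem = [sum(vals) / len(top) for vals in zip(*top)]
        return [(1 - alpha) * f + alpha * m for f, m in zip(feat, mem)]


bank = ActionMemoryBank()
bank.write("running", [1.0, 0.0])
bank.write("running", [0.8, 0.2])
enriched = bank.enrich([0.9, 0.1], "running", k=2)
```

In the paper, the action label would come from the proposed action-prediction branch and the features from the learned encoders; here both are stubbed with toy vectors purely to show the retrieve-and-blend pattern.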

Published

2022-06-28

How to Cite

Li, J., Niu, L., & Zhang, L. (2022). Action-Aware Embedding Enhancement for Image-Text Retrieval. Proceedings of the AAAI Conference on Artificial Intelligence, 36(2), 1323-1331. https://doi.org/10.1609/aaai.v36i2.20020

Section

AAAI Technical Track on Computer Vision II