Action-Aware Embedding Enhancement for Image-Text Retrieval
Keywords: Computer Vision (CV)
Abstract
Image-text retrieval plays a central role in bridging vision and language, aiming to reduce the semantic discrepancy between images and texts. Most existing works rely on refined word and object representations obtained through data-oriented methods to capture word-object co-occurrence. Such approaches tend to ignore the asymmetric action relation between images and texts: a text has an explicit action representation (i.e., a verb phrase), while an image contains only implicit action information. In this paper, we propose an Action-aware Memory-Enhanced embedding (AME) method for image-text retrieval, which emphasizes action information when mapping images and texts into a shared embedding space. Specifically, we integrate action prediction with an action-aware memory bank to enrich image and text features with action-similar text features. The effectiveness of the proposed AME method is verified by comprehensive experimental results on two benchmark datasets.
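The abstract's core idea of enriching a feature with action-similar text features from a memory bank can be sketched as follows. This is a minimal illustration, not the authors' implementation: the retrieval-by-cosine-similarity scheme, the `top_k` and `alpha` parameters, and the softmax-weighted fusion are all assumptions introduced here for clarity.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    # Normalize vectors to unit length for cosine similarity.
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def enrich_with_memory(query, memory_bank, top_k=2, alpha=0.5):
    """Fuse a feature with its most action-similar memory entries (a sketch).

    query:       (d,) image or text embedding
    memory_bank: (n, d) stored action-aware text features
    top_k:       number of memory entries to retrieve (assumed hyperparameter)
    alpha:       fusion weight for the retrieved memory feature (assumed)
    """
    sims = l2_normalize(memory_bank) @ l2_normalize(query)  # (n,) cosine sims
    idx = np.argsort(sims)[::-1][:top_k]                    # top-k similar entries
    w = np.exp(sims[idx])
    w /= w.sum()                                            # softmax weights
    retrieved = w @ memory_bank[idx]                        # weighted memory feature
    return (1 - alpha) * query + alpha * retrieved          # residual-style fusion

# Toy usage: a 4-entry bank of one-hot "action" features.
bank = np.eye(4)
q = np.array([0.9, 0.1, 0.0, 0.0])
enriched = enrich_with_memory(q, bank)
```

With `alpha = 0` the query passes through unchanged; larger values pull the embedding toward the retrieved action-similar text features, which is the enrichment effect the abstract describes.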
How to Cite
Li, J., Niu, L., & Zhang, L. (2022). Action-Aware Embedding Enhancement for Image-Text Retrieval. Proceedings of the AAAI Conference on Artificial Intelligence, 36(2), 1323-1331. https://doi.org/10.1609/aaai.v36i2.20020
AAAI Technical Track on Computer Vision II