Context-I2W: Mapping Images to Context-Dependent Words for Accurate Zero-Shot Composed Image Retrieval

Authors

  • Yuanmin Tang, Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences
  • Jing Yu, Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences
  • Keke Gai, School of Cyberspace Science and Technology, Beijing Institute of Technology
  • Jiamin Zhuang, Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences
  • Gang Xiong, Institute of Information Engineering, Chinese Academy of Sciences
  • Yue Hu, Institute of Information Engineering, Chinese Academy of Sciences
  • Qi Wu University of Adelaide

DOI:

https://doi.org/10.1609/aaai.v38i6.28324

Keywords:

CV: Language and Vision

Abstract

Unlike the Composed Image Retrieval task, which requires expensive labels for training task-specific models, Zero-Shot Composed Image Retrieval (ZS-CIR) involves diverse tasks with a broad range of visual content manipulation intents that may relate to domain, scene, object, and attribute. The key challenge for ZS-CIR tasks is to learn a more accurate image representation that attends adaptively to the reference image under varying manipulation descriptions. In this paper, we propose a novel context-dependent mapping network, named Context-I2W, that adaptively converts description-relevant image information into a pseudo-word token, which is composed with the description for accurate ZS-CIR. Specifically, an Intent View Selector first dynamically learns a rotation rule that maps the identical image to a task-specific manipulation view. A Visual Target Extractor then captures local information covering the main targets in ZS-CIR tasks under the guidance of multiple learnable queries. The two complementary modules work together to map an image to a context-dependent pseudo-word token without extra supervision. Our model shows strong generalization ability on four ZS-CIR tasks: domain conversion, object composition, object manipulation, and attribute manipulation. It obtains consistent and significant performance boosts ranging from 1.88% to 3.60% over the best existing methods and achieves new state-of-the-art results on ZS-CIR. Our code is available at https://anonymous.4open.science/r/Context-I2W-4224/.
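
To make the described pipeline concrete, below is a minimal, hypothetical PyTorch-style sketch of how such a context-dependent image-to-word mapping could be wired up. The module names, dimensions, query pooling, and the use of an unconstrained linear map in place of a learned rotation are all illustrative assumptions, not the authors' released implementation (see the repository linked in the abstract for that).

```python
# Hypothetical sketch of a context-dependent image-to-word mapping
# (names and dimensions are illustrative assumptions, not the paper's code).
import torch
import torch.nn as nn


class ContextI2WSketch(nn.Module):
    def __init__(self, dim=512, n_queries=8, n_heads=8):
        super().__init__()
        # "Intent View Selector": predicts a rotation-like linear map of the
        # image features, conditioned on the manipulation description.
        self.view_selector = nn.Linear(dim, dim * dim)
        # "Visual Target Extractor": learnable queries pull out the
        # description-relevant local visual information via cross-attention.
        self.queries = nn.Parameter(torch.randn(n_queries, dim))
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.to_token = nn.Linear(dim, dim)  # fuse into one pseudo-word token

    def forward(self, patch_feats, text_ctx):
        # patch_feats: (B, N, dim) local image features, e.g. CLIP patch tokens
        # text_ctx:    (B, dim) pooled manipulation-description embedding
        B, N, D = patch_feats.shape
        # 1) Map the identical image to a task-specific manipulation "view".
        rot = self.view_selector(text_ctx).view(B, D, D)
        viewed = torch.bmm(patch_feats, rot)                 # (B, N, D)
        # 2) Extract target-relevant local information with learned queries.
        q = self.queries.unsqueeze(0).expand(B, -1, -1)      # (B, Q, D)
        attended, _ = self.cross_attn(q, viewed, viewed)     # (B, Q, D)
        # 3) Pool the queries into a single pseudo-word token S*.
        return self.to_token(attended.mean(dim=1))           # (B, D)


# Usage: the pseudo-word token stands in for a word S* in a prompt such as
# "a photo of S* that <manipulation description>", which is then encoded by
# the text encoder for retrieval.
model = ContextI2WSketch()
token = model(torch.randn(2, 49, 512), torch.randn(2, 512))
print(token.shape)  # torch.Size([2, 512])
```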

Published

2024-03-24

How to Cite

Tang, Y., Yu, J., Gai, K., Zhuang, J., Xiong, G., Hu, Y., & Wu, Q. (2024). Context-I2W: Mapping Images to Context-Dependent Words for Accurate Zero-Shot Composed Image Retrieval. Proceedings of the AAAI Conference on Artificial Intelligence, 38(6), 5180-5188. https://doi.org/10.1609/aaai.v38i6.28324

Section

AAAI Technical Track on Computer Vision V