Zero-Shot Sketch-Based Image Retrieval via Graph Convolution Network

Authors

  • Zhaolong Zhang, Fudan University
  • Yuejie Zhang, Fudan University
  • Rui Feng, Fudan University
  • Tao Zhang, Shanghai University of Finance and Economics
  • Weiguo Fan, University of Iowa

DOI:

https://doi.org/10.1609/aaai.v34i07.6993

Abstract

Zero-Shot Sketch-Based Image Retrieval (ZS-SBIR) has recently been proposed, placing traditional Sketch-Based Image Retrieval (SBIR) under the zero-shot learning setting. Because it must address the challenges of both SBIR and zero-shot learning, it is a more difficult task. Previous works mainly exploit a single kind of information, i.e., either visual or semantic information. In this paper, we propose SketchGCN, a graph convolution network based model that simultaneously considers both visual and semantic information. Thus, our model can effectively narrow the domain gap and transfer knowledge. Furthermore, we generate semantic information from visual information with a Conditional Variational Autoencoder rather than merely mapping visual features back to the semantic space, which enhances the generalization ability of our model. In addition, a feature loss, a classification loss, and a semantic loss are introduced to optimize the proposed SketchGCN model. Our model achieves strong performance on the challenging Sketchy and TU-Berlin datasets.
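To make the high-level architecture described above more concrete, the following is a minimal, hypothetical sketch (not the authors' released code): a single graph convolution layer that propagates visual features over a semantic-similarity graph, plus a weighted sum of the three losses named in the abstract. The layer names, the adjacency construction, and the loss weights are illustrative assumptions only.

```python
# Illustrative sketch of a GCN layer fusing visual and semantic information,
# and a combined training objective; all names and weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphConvLayer(nn.Module):
    """One GCN layer: H' = ReLU(A_norm @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, features, adj):
        # features: (N, in_dim) visual features of sketch/image nodes in a batch.
        # adj: (N, N) normalized adjacency built from semantic similarity.
        return F.relu(adj @ self.weight(features))

def total_loss(feat_loss, cls_loss, sem_loss, lambdas=(1.0, 1.0, 1.0)):
    # Weighted combination of the feature, classification, and semantic losses
    # mentioned in the abstract; the weights here are placeholders.
    return lambdas[0] * feat_loss + lambdas[1] * cls_loss + lambdas[2] * sem_loss

# Toy usage: 8 nodes with 512-d visual features and a stand-in semantic graph.
x = torch.randn(8, 512)
a = torch.softmax(torch.randn(8, 8), dim=-1)   # placeholder normalized adjacency
gcn = GraphConvLayer(512, 256)
h = gcn(x, a)                                  # fused node embeddings, shape (8, 256)
```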

Published

2020-04-03

How to Cite

Zhang, Z., Zhang, Y., Feng, R., Zhang, T., & Fan, W. (2020). Zero-Shot Sketch-Based Image Retrieval via Graph Convolution Network. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 12943-12950. https://doi.org/10.1609/aaai.v34i07.6993

Section

AAAI Technical Track: Vision