ARNet: Self-Supervised FG-SBIR with Unified Sample Feature Alignment and Multi-Scale Token Recycling

Authors

  • Jianan Jiang, Hunan University; ExponentiAI Innovation
  • Hao Tang, Peking University
  • Zhilin Jiang, Hunan University
  • Weiren Yu, University of Warwick
  • Di Wu, Hunan University; ExponentiAI Innovation

DOI:

https://doi.org/10.1609/aaai.v39i4.32417

Abstract

Fine-Grained Sketch-Based Image Retrieval (FG-SBIR) aims to minimize the distance between sketches and their corresponding images in the embedding space. However, scalability is hindered by the growing complexity of existing solutions, largely due to the abstract nature of fine-grained sketches. In this paper, we propose an effective approach to narrow the gap between the two domains. Rather than treating retrieval as a single feature-alignment problem between modalities, our approach promotes unified mutual information sharing both within and across samples. Specifically, it includes: (i) dual weight-sharing networks that optimize alignment within the sketch and image domains and also effectively mitigate model learning saturation; (ii) an objective function based on contrastive loss that strengthens the model's ability to align features both within and across samples; and (iii) a self-supervised Multi-Scale Token Recycling (MSTR) module that recycles discarded patch tokens from multi-scale features, further improving representation capability and retrieval performance. Our framework achieves excellent results on both CNN- and ViT-based backbones, and extensive experiments demonstrate its superiority over existing methods. We also introduce Cloths-V1, the first professional fashion sketch-image dataset, which we use to validate our method and which will benefit other applications.
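The abstract does not spell out the contrastive objective in (ii). As a rough, hypothetical illustration of the general idea of cross-modal alignment, the sketch below implements a symmetric InfoNCE-style loss over a batch of paired sketch/image embeddings, where each sketch should be closest to its own image among all images in the batch (and vice versa). The function names, the temperature value, and the loss form are assumptions for illustration, not the paper's actual formulation.

```python
import math


def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)


def symmetric_info_nce(sketch_embs, image_embs, temperature=0.1):
    """Illustrative symmetric InfoNCE-style contrastive loss.

    sketch_embs[i] and image_embs[i] are assumed to be a matched pair;
    all other batch entries serve as negatives. This is a generic
    cross-modal alignment loss, not ARNet's exact objective.
    """
    n = len(sketch_embs)
    total = 0.0
    for i in range(n):
        # Sketch -> image direction: the i-th image is the positive.
        logits = [cosine(sketch_embs[i], image_embs[j]) / temperature
                  for j in range(n)]
        log_den = math.log(sum(math.exp(l) for l in logits))
        total += -(logits[i] - log_den)
        # Image -> sketch direction: the i-th sketch is the positive.
        logits = [cosine(image_embs[i], sketch_embs[j]) / temperature
                  for j in range(n)]
        log_den = math.log(sum(math.exp(l) for l in logits))
        total += -(logits[i] - log_den)
    return total / (2 * n)
```

Correctly matched pairs should yield a lower loss than shuffled pairs, which is the property such an objective exploits to pull corresponding sketch and image features together in the shared embedding space.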

Published

2025-04-11

How to Cite

Jiang, J., Tang, H., Jiang, Z., Yu, W., & Wu, D. (2025). ARNet: Self-Supervised FG-SBIR with Unified Sample Feature Alignment and Multi-Scale Token Recycling. Proceedings of the AAAI Conference on Artificial Intelligence, 39(4), 3985-3993. https://doi.org/10.1609/aaai.v39i4.32417

Section

AAAI Technical Track on Computer Vision III