Ranking-Based Deep Cross-Modal Hashing


  • Xuanwu Liu Southwest University
  • Guoxian Yu Southwest University
  • Carlotta Domeniconi George Mason University
  • Jun Wang Southwest University
  • Yazhou Ren University of Electronic Science and Technology of China
  • Maozu Guo Beijing University of Civil Engineering and Architecture




Cross-modal hashing has been attracting increasing interest for its low storage cost and fast query speed in multi-modal data retrieval. However, most existing hashing methods are based on hand-crafted or raw-level features of objects, which may not be optimally compatible with the coding process. Moreover, these hashing methods are mainly designed to handle simple pairwise similarity; the complex multilevel ranking semantic structure of instances associated with multiple labels has not yet been well explored. In this paper, we propose a ranking-based deep cross-modal hashing approach (RDCMH). RDCMH first uses the feature and label information of data to derive a semi-supervised semantic ranking list. Next, to expand the semantic representation power of hand-crafted features, RDCMH integrates the semantic ranking information into deep cross-modal hashing and jointly optimizes the compatible parameters of the deep feature representations and of the hashing functions. Experiments on real multi-modal datasets show that RDCMH outperforms competitive baselines and achieves state-of-the-art performance in cross-modal retrieval applications.
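To make the "multilevel ranking semantic structure" concrete, the following minimal sketch (our own illustration, not the paper's exact formulation) derives a semantic ranking list for a query instance from multi-label annotations, using Jaccard overlap of label sets as a graded similarity: instances sharing more labels with the query rank higher, which is finer-grained than the 0/1 pairwise similarity most hashing methods use.

```python
import numpy as np

def jaccard(a, b):
    """Jaccard overlap of two binary label vectors (assumed illustration)."""
    inter = np.sum(np.logical_and(a, b))
    union = np.sum(np.logical_or(a, b))
    return inter / union if union > 0 else 0.0

def semantic_ranking(query_labels, db_labels):
    """Return database indices sorted by label overlap with the query."""
    sims = np.array([jaccard(query_labels, l) for l in db_labels])
    order = np.argsort(-sims)  # descending similarity
    return order, sims[order]

# Toy multi-label annotations (rows: instances, columns: labels).
db = np.array([
    [1, 1, 0, 0],  # shares 2 labels with the query
    [1, 0, 0, 0],  # shares 1 label
    [0, 0, 1, 1],  # shares no labels
])
query = np.array([1, 1, 0, 0])
order, sims = semantic_ranking(query, db)
print(order.tolist())  # → [0, 1, 2]
```

In RDCMH such a ranking list is derived semi-supervisedly from both labels and features; the sketch above covers only the labeled case, and the deep hashing networks are then trained so that Hamming distances between binary codes preserve this ranking.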




How to Cite

Liu, X., Yu, G., Domeniconi, C., Wang, J., Ren, Y., & Guo, M. (2019). Ranking-Based Deep Cross-Modal Hashing. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 4400-4407. https://doi.org/10.1609/aaai.v33i01.33014400



AAAI Technical Track: Machine Learning