Similarity Preserving Deep Asymmetric Quantization for Image Retrieval


  • Junjie Chen Hong Kong Baptist University
  • William K. Cheung Hong Kong Baptist University



Quantization has been widely adopted for large-scale multimedia retrieval due to its effectiveness in coding high-dimensional data. Deep quantization models have been demonstrated to achieve state-of-the-art retrieval accuracy. However, training deep models on a large-scale database is highly time-consuming because a large number of parameters is involved. Existing deep quantization methods therefore often sample only a subset of the database for training, which can result in unsatisfactory retrieval performance since a large portion of the label information is discarded. To alleviate this problem, we propose a novel model called Similarity Preserving Deep Asymmetric Quantization (SPDAQ), which can directly and efficiently learn compact binary codes and quantization codebooks for all items in the database. To do so, SPDAQ exploits an image subset together with the label information of all database items, so that the subset items and the database items are mapped to two different but correlated distributions in which label similarity is well preserved. An efficient optimization algorithm is proposed for the learning. Extensive experiments on four widely-used benchmark datasets demonstrate the superiority of the proposed SPDAQ model.
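At retrieval time, asymmetric quantization schemes of this family compare a real-valued query against quantized database codes via precomputed distance lookup tables, so the database side never needs to be decoded explicitly. The following is a minimal sketch of this generic asymmetric-distance computation (product-quantization-style, not the exact SPDAQ formulation; all function names and shapes are illustrative):

```python
import numpy as np

def build_lookup_table(query, codebooks):
    """Precompute squared distances from each query sub-vector to every codeword.

    query: (d,) real-valued vector.
    codebooks: list of M arrays, each of shape (K, d/M) holding K codewords
               for one subspace.
    Returns a (M, K) table of per-subspace squared distances.
    """
    M = len(codebooks)
    sub_queries = np.split(query, M)
    return np.stack([((cb - q) ** 2).sum(axis=1)
                     for q, cb in zip(sub_queries, codebooks)])

def asymmetric_distances(table, codes):
    """Sum table entries selected by each database item's codeword indices.

    table: (M, K) lookup table from build_lookup_table.
    codes: (N, M) integer array; codes[n, m] is item n's codeword in subspace m.
    Returns (N,) approximate squared distances, via table lookups only.
    """
    M = table.shape[0]
    return table[np.arange(M), codes].sum(axis=1)

# Toy example with illustrative sizes: d=8 dims, M=2 subspaces, K=4 codewords.
rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(4, 4)) for _ in range(2)]
codes = rng.integers(0, 4, size=(5, 2))      # 5 quantized database items
query = rng.normal(size=8)
dists = asymmetric_distances(build_lookup_table(query, codebooks), codes)
ranking = np.argsort(dists)                  # nearest database items first
```

Because only the database side is quantized, the query incurs no quantization error, which is the key accuracy advantage of asymmetric over symmetric schemes; the lookup tables cost O(K·d) to build per query and each database comparison is then just M table reads.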




How to Cite

Chen, J., & Cheung, W. K. (2019). Similarity Preserving Deep Asymmetric Quantization for Image Retrieval. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 8183-8190.



AAAI Technical Track: Vision