Compressed Self-Attention for Deep Metric Learning


  • Ziye Chen Wuhan University
  • Mingming Gong University of Melbourne
  • Yanwu Xu University of Pittsburgh
  • Chaohui Wang Université Paris-Est
  • Kun Zhang Carnegie Mellon University
  • Bo Du Wuhan University



In this paper, we aim to enhance the self-attention (SA) mechanism for deep metric learning in visual perception by capturing richer contextual dependencies in visual data. To this end, we propose a novel module, named compressed self-attention (CSA), which significantly reduces the computation and memory cost with a negligible decrease in accuracy with respect to the original SA mechanism, thanks to the following two characteristics: i) it only needs to compute a small number of base attention maps for a small number of base feature vectors; and ii) the output at each spatial location can be obtained simply as an adaptive weighted average of the outputs calculated from the base attention maps. The high computational efficiency of CSA enables its application to high-resolution shallow layers in convolutional neural networks at little additional cost. In addition, CSA makes it practical to further partition the feature maps into groups along the channel dimension and compute attention maps for the features in each group separately, thus increasing the diversity of long-range dependencies and accordingly boosting accuracy. We evaluate the performance of CSA via extensive experiments on two metric learning tasks: person re-identification and local descriptor learning. Qualitative and quantitative comparisons with the latest methods demonstrate the effectiveness of CSA on these tasks.
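The two characteristics in the abstract can be illustrated with a minimal numpy sketch. This is an assumption-laden illustration, not the authors' implementation: here the base feature vectors are chosen by uniform stride sampling (the paper's actual selection scheme may differ), and both the base attention maps and the per-location mixing weights use scaled dot-product softmax. The point is the cost structure: with m base maps over N locations, the work is O(mNC) rather than the O(N²C) of full self-attention.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def compressed_self_attention(X, m=8):
    """Sketch of compressed self-attention.

    X : (N, C) feature map flattened over spatial locations.
    m : number of base attention maps, m << N.

    Assumptions (not from the paper): base vectors picked by uniform
    stride sampling; dot-product attention for both stages.
    """
    N, C = X.shape
    # i) a small number of base feature vectors and their attention maps
    idx = np.linspace(0, N - 1, m).astype(int)
    B = X[idx]                                    # (m, C) base features
    A = softmax(B @ X.T / np.sqrt(C), axis=-1)    # (m, N) base attention maps
    base_out = A @ X                              # (m, C) outputs of base maps
    # ii) each location's output is an adaptive weighted average
    #     of the m base outputs
    W = softmax(X @ B.T / np.sqrt(C), axis=-1)    # (N, m) adaptive weights
    return W @ base_out                           # (N, C)
```

A grouped variant, as mentioned in the abstract, would split the C channels into G groups and run this routine on each (N, C/G) slice independently, concatenating the results.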




How to Cite

Chen, Z., Gong, M., Xu, Y., Wang, C., Zhang, K., & Du, B. (2020). Compressed Self-Attention for Deep Metric Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 3561-3568.



AAAI Technical Track: Machine Learning