Rethinking the Optimization of Average Precision: Only Penalizing Negative Instances before Positive Ones Is Enough

Authors

  • Zhuo Li Institute of Computing Technology, Chinese Academy of Sciences
  • Weiqing Min Institute of Computing Technology, Chinese Academy of Sciences
  • Jiajun Song Institute of Computing Technology, Chinese Academy of Sciences
  • Yaohui Zhu Institute of Computing Technology, Chinese Academy of Sciences
  • Liping Kang Meituan
  • Xiaoming Wei Meituan
  • Xiaolin Wei Meituan
  • Shuqiang Jiang Institute of Computing Technology, Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v36i2.20042

Keywords:

Computer Vision (CV)

Abstract

Optimizing the approximation of Average Precision (AP) has been widely studied for image retrieval. Limited by the definition of AP, such methods consider both negative and positive instances ranked before each positive instance. However, we claim that only penalizing negative instances before positive ones is enough, because the loss comes only from these negative instances. To this end, we propose a novel loss, namely Penalizing Negative instances before Positive ones (PNP), which can directly minimize the number of negative instances before each positive one. In addition, AP-based methods adopt a fixed and sub-optimal gradient assignment strategy. Therefore, we systematically investigate different gradient assignment solutions by constructing derivative functions of the loss, resulting in PNP-I with increasing derivative functions and PNP-D with decreasing ones. PNP-I focuses more on the hard positive instances by assigning larger gradients to them and tries to make all relevant instances closer. In contrast, PNP-D pays less attention to such instances and corrects them slowly. For most real-world data, one class usually contains several local clusters. PNP-I blindly gathers these clusters, while PNP-D keeps them as they were. Therefore, PNP-D is superior. Experiments on three standard retrieval datasets show results consistent with the above analysis. Extensive evaluations demonstrate that PNP-D achieves state-of-the-art performance. Code is available at https://github.com/interestingzhuo/PNPloss
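The core idea in the abstract can be illustrated with a small sketch: for each positive instance, softly count the negatives ranked before it (i.e., scoring above it), then apply a penalty whose derivative over that count is decreasing (a PNP-D-style choice such as ln(1+r)) or increasing (a PNP-I-style choice such as r²/2). This is a minimal illustration under assumed details (sigmoid smoothing with temperature `tau`, and these specific penalty functions), not the authors' exact formulation; see the linked repository for the official implementation.

```python
import math


def pnp_loss(sims, labels, tau=0.01, variant="D"):
    """Hypothetical sketch of a PNP-style loss.

    sims:   similarity scores of retrieved instances to a query
    labels: 1 for positive (relevant), 0 for negative
    tau:    temperature of the sigmoid used to smooth the rank indicator
    variant: "D" for a decreasing-derivative penalty, "I" for increasing
    """
    pos = [s for s, y in zip(sims, labels) if y == 1]
    neg = [s for s, y in zip(sims, labels) if y == 0]

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    losses = []
    for p in pos:
        # Soft count of negatives ranked before p: each negative scoring
        # above p contributes ~1, each scoring below contributes ~0.
        r = sum(sigmoid((n - p) / tau) for n in neg)
        if variant == "D":
            # Derivative 1/(1+r) shrinks as r grows: hard positives
            # (many negatives ahead) are corrected slowly.
            losses.append(math.log1p(r))
        else:
            # Derivative r grows with r: hard positives get larger gradients.
            losses.append(0.5 * r * r)
    return sum(losses) / len(losses)
```

With a perfectly ranked list (all positives above all negatives) the soft count is near zero for every positive, so the loss is near zero; swapping a negative above a positive raises it.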

Published

2022-06-28

How to Cite

Li, Z., Min, W., Song, J., Zhu, Y., Kang, L., Wei, X., Wei, X., & Jiang, S. (2022). Rethinking the Optimization of Average Precision: Only Penalizing Negative Instances before Positive Ones Is Enough. Proceedings of the AAAI Conference on Artificial Intelligence, 36(2), 1518-1526. https://doi.org/10.1609/aaai.v36i2.20042

Section

AAAI Technical Track on Computer Vision II