Softmax Dissection: Towards Understanding Intra- and Inter-Class Objective for Embedding Learning

Authors

  • Lanqing He Tsinghua University
  • Zhongdao Wang Tsinghua University
  • Yali Li Tsinghua University
  • Shengjin Wang Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v34i07.6729

Abstract

The softmax loss and its variants are widely used as objectives for embedding learning applications like face recognition. However, the intra- and inter-class objectives in Softmax are entangled, so a well-optimized inter-class objective leads to relaxation of the intra-class objective, and vice versa. In this paper, we propose to dissect Softmax into independent intra- and inter-class objectives (D-Softmax) with a clear understanding. With D-Softmax as the objective, it is straightforward to tune each part to its best state. Furthermore, we find the computation of the inter-class part is redundant and propose sampling-based variants of D-Softmax to reduce the computation cost. Face recognition experiments on regular-scale data show D-Softmax is favorably comparable to existing losses such as SphereFace and ArcFace. Experiments on massive-scale data show the fast variants significantly accelerate the training process (e.g., by 64×) with only a minor sacrifice in performance, outperforming existing acceleration methods for Softmax in terms of both performance and efficiency.
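The core idea of the dissection can be illustrated with a minimal sketch. The exact D-Softmax formulation is given in the paper; the version below is an assumption-laden toy: it pulls the target logit above a margin (intra-class term) and suppresses the non-target logits (inter-class term) as two additive, non-interacting terms. The names `margin`, `scale`, and `d_softmax_sketch` are illustrative, not from the paper.

```python
import numpy as np

def d_softmax_sketch(logits, label, margin=0.9, scale=16.0):
    """Toy decomposition of softmax into independent intra- and
    inter-class terms (an illustrative sketch, not the paper's
    exact D-Softmax loss)."""
    target = logits[label]
    # Intra-class objective: softplus-style penalty that is active
    # only while the target logit sits below the margin.
    intra = np.log1p(np.exp(scale * (margin - target)))
    # Inter-class objective: log-sum-exp over non-target classes
    # only, so optimizing it cannot relax the intra-class term.
    others = np.delete(logits, label)
    inter = np.log1p(np.sum(np.exp(scale * (others - margin))))
    return intra + inter
```

Because the inter-class term is a sum over non-target classes, it is also the natural place to subsample classes, which is the intuition behind the fast variants mentioned in the abstract.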

Published

2020-04-03

How to Cite

He, L., Wang, Z., Li, Y., & Wang, S. (2020). Softmax Dissection: Towards Understanding Intra- and Inter-Class Objective for Embedding Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 10957-10964. https://doi.org/10.1609/aaai.v34i07.6729

Section

AAAI Technical Track: Vision