Scalable and Efficient Pairwise Learning to Achieve Statistical Accuracy


  • Bin Gu University of Pittsburgh
  • Zhouyuan Huo University of Pittsburgh
  • Heng Huang University of Pittsburgh



Pairwise learning is an important topic in the machine learning community, where the loss function involves pairs of samples (e.g., AUC maximization and metric learning). Existing pairwise learning algorithms do not achieve generality, scalability and efficiency simultaneously. To address these challenges, in this paper we first analyze the relationship between the statistical accuracy and the regularized empirical risk for pairwise losses. Based on this relationship, we propose a scalable and efficient adaptive doubly stochastic gradient algorithm (AdaDSG) for generalized regularized pairwise learning problems. More importantly, we prove that the overall computational cost of AdaDSG is O(n) to achieve statistical accuracy on the full training set of size n, which is, to the best of our knowledge, the best theoretical result for pairwise learning. The experimental results on a variety of real-world datasets not only confirm the effectiveness of our AdaDSG algorithm, but also show that AdaDSG has significantly better scalability and efficiency than existing pairwise learning algorithms.
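To make the setting concrete, the sketch below illustrates a generic doubly stochastic update for a regularized pairwise loss: at each step a random pair of samples is drawn and a gradient step is taken on a squared-hinge AUC surrogate for a linear model. This is a minimal illustration of pairwise learning with pair-level stochastic sampling, not the authors' AdaDSG algorithm; the data, loss, and step-size schedule are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable binary data with labels in {-1, +1} (illustrative only).
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.sign(X @ w_true)
y[y == 0] = 1.0

w = np.zeros(d)
lam, lr0 = 1e-3, 0.1  # L2 regularization weight and initial step size

def pairwise_grad(w, xi, yi, xj, yj, lam):
    """Gradient on one pair of a squared-hinge AUC surrogate plus L2 term.

    For an oppositely labelled pair, the pair loss is
        max(0, 1 - (yi - yj)/2 * w.(xi - xj))**2,
    which penalizes ranking the negative sample above the positive one.
    """
    diff = xi - xj
    c = 0.5 * (yi - yj)            # +1 or -1 for an opposite-label pair
    slack = max(0.0, 1.0 - c * w.dot(diff))
    return -2.0 * slack * c * diff + lam * w

for t in range(2000):
    # "Doubly stochastic": draw a random pair of sample indices each step.
    i, j = rng.integers(n, size=2)
    if y[i] == y[j]:
        continue  # AUC-style pairwise losses only use oppositely labelled pairs
    w -= lr0 / (1.0 + 0.01 * t) * pairwise_grad(w, X[i], y[i], X[j], y[j], lam)

# Training-set AUC: fraction of positive/negative pairs ranked correctly.
scores = X @ w
auc = np.mean(scores[y == 1][:, None] > scores[y == -1][None, :])
```

Because the gradient is evaluated on a single random pair rather than on all O(n^2) pairs, each iteration costs O(d), which is the property that makes pair-level stochastic methods attractive at scale.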




How to Cite

Gu, B., Huo, Z., & Huang, H. (2019). Scalable and Efficient Pairwise Learning to Achieve Statistical Accuracy. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 3697-3704.



AAAI Technical Track: Machine Learning