Efficient Stochastic Optimization for Low-Rank Distance Metric Learning

Authors

  • Jie Zhang Nanjing University
  • Lijun Zhang Nanjing University

DOI:

https://doi.org/10.1609/aaai.v31i1.10649

Keywords:

Stochastic Optimization, Low-Rank DML, Incremental SVD

Abstract

Although distance metric learning has been successfully applied to many real-world applications, learning a distance metric from large-scale, high-dimensional data remains a challenging problem. Due to the PSD constraint, the per-iteration computational complexity of previous algorithms is at least O(d^2), where d is the dimensionality of the data. In this paper, we develop an efficient stochastic algorithm for a class of distance metric learning problems with nuclear norm regularization, referred to as low-rank DML. By exploiting the low-rank structure of the intermediate solutions and of the stochastic gradients, our algorithm achieves a complexity that is linear in the dimensionality d. The key idea is to maintain all the iterates in factorized representations and to construct stochastic gradients that are low-rank. In this way, the projection onto the PSD cone can be implemented efficiently by incremental SVD. Experimental results on several data sets validate the effectiveness and efficiency of our method.
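The key mechanism described in the abstract, keeping the metric in a factored eigendecomposition and absorbing a low-rank stochastic gradient step via an incremental eigenupdate before clipping negative eigenvalues, can be sketched as follows. This is an illustrative NumPy sketch of the general idea only, not the authors' implementation; the function name `low_rank_psd_update`, the rank-1 gradient form `x x^T`, and the step size are assumptions made for the example.

```python
import numpy as np

def low_rank_psd_update(U, s, x, eta):
    """One stochastic step M <- P_PSD(M - eta * x x^T), with M = U diag(s) U^T
    kept in factored form (U: d x k orthonormal, s: k nonnegative eigenvalues).
    The rank-1 update is handled by an incremental eigendecomposition, so the
    cost is O(d k^2 + k^3) instead of the O(d^3) of a dense eigensolve.
    NOTE: illustrative sketch only, not the paper's actual algorithm."""
    p = U.T @ x                      # component of x inside span(U)
    r = x - U @ p                    # residual orthogonal to span(U)
    r_norm = np.linalg.norm(r)
    if r_norm > 1e-10:               # extend the basis by one direction
        U_ext = np.hstack([U, (r / r_norm)[:, None]])
        p_ext = np.append(p, r_norm)
    else:                            # x already lies in span(U)
        U_ext, p_ext = U, p
    # Small core matrix in the (at most k+1)-dimensional extended basis:
    # diag(s) - eta * p_ext p_ext^T represents M - eta * x x^T exactly.
    k = U_ext.shape[1]
    core = np.zeros((k, k))
    core[:len(s), :len(s)] = np.diag(s)
    core -= eta * np.outer(p_ext, p_ext)
    # Eigendecompose the small core; dropping nonpositive eigenvalues is the
    # projection onto the PSD cone, done without ever forming the d x d matrix.
    w, V = np.linalg.eigh(core)
    keep = w > 1e-10
    return U_ext @ V[:, keep], w[keep]

# Demo: one factored update agrees with the dense PSD projection.
rng = np.random.default_rng(0)
d = 6
A = rng.standard_normal((d, 3))
w, V = np.linalg.eigh(A @ A.T)                  # rank-3 PSD starting metric
U, s = V[:, w > 1e-10], w[w > 1e-10]
x = rng.standard_normal(d)
U_new, s_new = low_rank_psd_update(U, s, x, eta=0.5)
M_new = U_new @ np.diag(s_new) @ U_new.T
# Dense reference: full eigendecomposition, then clip negative eigenvalues.
wd, Vd = np.linalg.eigh(A @ A.T - 0.5 * np.outer(x, x))
M_ref = Vd @ np.diag(np.clip(wd, 0, None)) @ Vd.T
```

Because the updated matrix lives entirely in span(U) plus the single new direction, the small eigendecomposition reproduces the dense projection exactly while touching only a d x (k+1) factor, which is where the claimed linear dependence on d comes from.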

Published

2017-02-12

How to Cite

Zhang, J., & Zhang, L. (2017). Efficient Stochastic Optimization for Low-Rank Distance Metric Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10649

Section

AAAI Technical Track: Heuristic Search and Optimization