Local and Global Convergence of General Burer-Monteiro Tensor Optimizations

Authors

  • Shuang Li University of California, Los Angeles
  • Qiuwei Li Damo Academy, Alibaba Group US

DOI:

https://doi.org/10.1609/aaai.v36i9.21267

Keywords:

Search And Optimization (SO), Constraint Satisfaction And Optimization (CSO)

Abstract

Tensor optimization is central to many large-scale machine learning and signal processing tasks. In this paper, we consider tensor optimization with a convex, well-conditioned objective function and reformulate it as a nonconvex optimization problem via a Burer-Monteiro type parameterization. We analyze the local convergence of vanilla gradient descent applied to this factored formulation and establish a local regularity condition under mild assumptions. We further provide a linear convergence analysis for gradient descent initialized in a neighborhood of the true tensor factors. Complementing the local analysis, this work also characterizes the global geometry of the best rank-one tensor approximation problem: for orthogonally decomposable tensors, the problem has no spurious local minima, and all saddle points are strict except the one at zero, which is a third-order saddle point.
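The local-convergence setting described in the abstract can be illustrated with a minimal sketch (not the authors' code): vanilla gradient descent on the Burer-Monteiro factored form of best rank-one approximation, min over (u, v, w) of 0.5·||T − u∘v∘w||_F², initialized near the true factors. All dimensions, step sizes, and iteration counts below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Ground-truth rank-one tensor T = a ∘ b ∘ c with unit-norm factors.
a, b, c = (x / np.linalg.norm(x) for x in rng.standard_normal((3, n)))
T = np.einsum("i,j,k->ijk", a, b, c)

# Initialize in a small neighborhood of the true factors,
# matching the paper's local-analysis setting.
u = a + 0.01 * rng.standard_normal(n)
v = b + 0.01 * rng.standard_normal(n)
w = c + 0.01 * rng.standard_normal(n)

lr = 0.1  # illustrative step size
for _ in range(500):
    R = np.einsum("i,j,k->ijk", u, v, w) - T  # residual tensor
    # Simultaneous (vanilla) gradient updates: each gradient contracts
    # the residual against the other two current factors.
    gu = np.einsum("ijk,j,k->i", R, v, w)
    gv = np.einsum("ijk,i,k->j", R, u, w)
    gw = np.einsum("ijk,i,j->k", R, u, v)
    u, v, w = u - lr * gu, v - lr * gv, w - lr * gw

loss = 0.5 * np.linalg.norm(np.einsum("i,j,k->ijk", u, v, w) - T) ** 2
print(f"final loss: {loss:.2e}")
```

With this near-true initialization the loss decays geometrically toward zero, consistent with the linear local convergence the paper establishes; the factors themselves recover u∘v∘w = T only up to the usual rescaling ambiguity among u, v, and w.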

Published

2022-06-28

How to Cite

Li, S., & Li, Q. (2022). Local and Global Convergence of General Burer-Monteiro Tensor Optimizations. Proceedings of the AAAI Conference on Artificial Intelligence, 36(9), 10266-10274. https://doi.org/10.1609/aaai.v36i9.21267

Section

AAAI Technical Track on Search and Optimization