Polynomial Optimization Methods for Matrix Factorization

Authors

  • Po-Wei Wang Carnegie Mellon University
  • Chun-Liang Li Carnegie Mellon University
  • J. Kolter Carnegie Mellon University

DOI

https://doi.org/10.1609/aaai.v31i1.10941

Keywords

polynomial optimization, matrix factorization, Durand-Kerner method

Abstract

Matrix factorization is a core technique in many machine learning problems, yet it also presents a nonconvex and often difficult-to-optimize problem. In this paper we present an approach based upon polynomial optimization techniques that both improves the convergence time of matrix factorization algorithms and helps them escape from local optima. Our method is based on the realization that, given a joint search direction in a matrix factorization task, we can solve the "subspace search" problem (the task of jointly finding the steps to take in each direction) by solving a bivariate quartic polynomial optimization problem. We derive two methods for solving this problem based upon sum-of-squares moment relaxations and the Durand-Kerner method, then apply these techniques to matrix factorization to derive a direct coordinate descent approach and a method for speeding up existing approaches. On three benchmark datasets we show that the method substantially improves convergence speed over state-of-the-art approaches, while also attaining a lower objective value.
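The Durand-Kerner method mentioned in the abstract finds all roots of a polynomial simultaneously, which is what makes it suitable for minimizing the low-degree (quartic) subproblems arising in the subspace search: the minimizer of a univariate polynomial is among the real roots of its derivative. Below is a minimal sketch of the basic Durand-Kerner iteration, not the paper's actual implementation; the polynomial, tolerance, iteration cap, and starting points are illustrative assumptions.

```python
def durand_kerner(coeffs, iters=200, tol=1e-12):
    """Approximate all complex roots of a monic polynomial.

    coeffs: [c_{n-1}, ..., c_1, c_0] so that
            p(x) = x^n + c_{n-1} x^{n-1} + ... + c_0.
    """
    n = len(coeffs)

    def p(x):
        # Horner evaluation of the monic polynomial at x.
        acc = 1.0 + 0j
        for c in coeffs:
            acc = acc * x + c
        return acc

    # Conventional initialization: powers of a point that is neither
    # real nor a root of unity, so the initial guesses are distinct.
    roots = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(iters):
        new = []
        for i, r in enumerate(roots):
            # Weierstrass correction: divide p(r) by the product of
            # differences with the other current root estimates.
            denom = 1.0 + 0j
            for j, s in enumerate(roots):
                if i != j:
                    denom *= (r - s)
            new.append(r - p(r) / denom)
        shift = max(abs(a - b) for a, b in zip(new, roots))
        roots = new
        if shift < tol:
            break
    return roots
```

For example, `durand_kerner([-3, 2])` applied to p(x) = x² − 3x + 2 = (x − 1)(x − 2) converges to approximations of the roots 1 and 2. Each sweep updates every root estimate using all the others, so the iterates repel each other and converge to distinct roots for generic starting points.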

Published

2017-02-13

How to Cite

Wang, P.-W., Li, C.-L., & Kolter, J. (2017). Polynomial Optimization Methods for Matrix Factorization. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10941