Scalable Sparse Covariance Estimation via Self-Concordance

Authors

  • Anastasios Kyrillidis, École Polytechnique Fédérale de Lausanne (EPFL)
  • Rabeeh Karimi Mahabadi, École Polytechnique Fédérale de Lausanne (EPFL)
  • Quoc Tran Dinh, École Polytechnique Fédérale de Lausanne (EPFL)
  • Volkan Cevher, École Polytechnique Fédérale de Lausanne (EPFL)

DOI:

https://doi.org/10.1609/aaai.v28i1.8960

Keywords:

Inexact proximal Newton methods, Sparse covariance estimation, Self-concordance property

Abstract

We consider the class of convex minimization problems composed of a self-concordant function (such as the logdet metric), a convex data fidelity term h(·), and a regularizing, possibly non-smooth, function g(·). Problems of this type have recently attracted a great deal of interest, mainly due to their ubiquity in high-impact applications. Under this locally Lipschitz continuous gradient setting, we analyze the convergence behavior of proximal Newton schemes, with the added twist of possibly inexact evaluations. We prove attractive convergence rate guarantees and enhance state-of-the-art optimization schemes to accommodate such developments. Experimental results on sparse covariance estimation show the merits of our algorithm, both in terms of recovery efficiency and complexity.
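To make the composite problem class concrete, the sketch below runs a plain proximal gradient method on a logdet-plus-ℓ1 objective of the form min_X −logdet(X) + tr(SX) + λ‖X‖₁ (a graphical-lasso-style instance of a self-concordant smooth term plus a non-smooth regularizer). This is only an illustrative first-order baseline under assumed parameter choices (λ, step size, iteration count), not the paper's inexact proximal Newton scheme.

```python
import numpy as np

def soft_threshold(X, tau):
    # Elementwise soft-thresholding: the proximal operator of tau * ||.||_1.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def prox_grad_logdet_l1(S, lam=0.2, step=0.1, iters=200):
    """Proximal gradient sketch for min_X -logdet(X) + tr(S X) + lam*||X||_1.

    Illustrative baseline only; the paper instead studies (inexact)
    proximal Newton schemes exploiting self-concordance of -logdet.
    """
    p = S.shape[0]
    X = np.eye(p)  # feasible, positive-definite initialization
    for _ in range(iters):
        grad = S - np.linalg.inv(X)          # gradient of the smooth part
        Y = X - step * grad                   # forward (gradient) step
        X_new = soft_threshold(Y, step * lam) # backward (proximal) step
        X_new = 0.5 * (X_new + X_new.T)       # re-symmetrize numerically
        # Crude safeguard: halve the step if the iterate leaves the PD cone.
        if np.min(np.linalg.eigvalsh(X_new)) <= 1e-8:
            step *= 0.5
            continue
        X = X_new
    return X

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 5))
S = np.cov(A, rowvar=False) + 0.5 * np.eye(5)  # well-conditioned sample covariance
X = prox_grad_logdet_l1(S)
```

The self-concordance of −logdet is what lets the paper's proximal Newton analysis dispense with a global Lipschitz-gradient assumption, which the simple step-size rule above implicitly relies on.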

Published

2014-06-21

How to Cite

Kyrillidis, A., Karimi Mahabadi, R., Tran Dinh, Q., & Cevher, V. (2014). Scalable Sparse Covariance Estimation via Self-Concordance. Proceedings of the AAAI Conference on Artificial Intelligence, 28(1). https://doi.org/10.1609/aaai.v28i1.8960

Section

Main Track: Novel Machine Learning Algorithms