On the Efficient Implementation of the Matrix Exponentiated Gradient Algorithm for Low-Rank Matrix Optimization


DOI: 10.1287/MOOR.2022.1332
arXiv: 2012.10469
OpenAlex: W4311730201
MaRDI QID: Q6122575
FDO: Q6122575


Authors: Dan Garber, Atara Kaplan


Publication date: 1 March 2024

Published in: Mathematics of Operations Research

Abstract: Convex optimization over the spectrahedron, i.e., the set of all real n × n positive semidefinite matrices with unit trace, has important applications in machine learning, signal processing, and statistics, mainly as a convex relaxation for optimization problems with low-rank matrices. It is also one of the most prominent examples in the theory of first-order methods for convex optimization in which non-Euclidean methods can be significantly preferable to their Euclidean counterparts. In particular, the desirable choice is the Matrix Exponentiated Gradient (MEG) method, which is based on the Bregman distance induced by the (negative) von Neumann entropy. Unfortunately, implementing MEG requires a full SVD computation on each iteration, which is not scalable to high-dimensional problems. In this work we propose efficient implementations of MEG, both with deterministic and stochastic gradients, which are tailored for optimization with low-rank matrices and only use a single low-rank SVD computation on each iteration. We also provide efficiently computable certificates for the correct convergence of our methods. Most importantly, we prove that under a strict complementarity condition, the suggested methods converge from a "warm-start" initialization with rates similar to those of their full-SVD-based counterparts. Finally, we present empirical experiments that both support our theoretical findings and demonstrate the practical appeal of our methods.
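
For context, the standard MEG update referenced in the abstract is a mirror-descent step taken in the matrix-logarithm domain and mapped back with the matrix exponential, normalized to unit trace so the iterate stays in the spectrahedron; the full spectral decomposition this requires on every iteration is exactly the per-iteration cost the paper's low-rank implementation avoids. The following is a minimal NumPy sketch of that textbook update only, not the paper's method; the function name and the step size eta are illustrative.

```python
import numpy as np

def meg_update(X, grad, eta):
    """One textbook Matrix Exponentiated Gradient step over the spectrahedron:
    X_next = exp(log(X) - eta * grad) / trace(exp(log(X) - eta * grad)).
    Uses a full eigendecomposition, the bottleneck the paper aims to avoid.
    """
    # Spectral logarithm of the current PSD, unit-trace iterate.
    w, V = np.linalg.eigh(X)
    w = np.clip(w, 1e-12, None)          # guard against zero eigenvalues
    log_X = (V * np.log(w)) @ V.T

    # Gradient step in the log domain, then map back via the matrix exponential.
    M = log_X - eta * grad
    mw, MV = np.linalg.eigh(M)
    mw = mw - mw.max()                   # shift for numerical stability
    exp_M = (MV * np.exp(mw)) @ MV.T

    # Normalize to unit trace so the new iterate lies in the spectrahedron.
    return exp_M / np.trace(exp_M)

# Illustrative usage: one step from the uniform matrix X0 = I/n.
n = 5
X0 = np.eye(n) / n
G = np.random.randn(n, n)
G = (G + G.T) / 2                        # symmetric stand-in for a gradient
X1 = meg_update(X0, G, eta=0.1)
print(np.trace(X1))                      # ~1.0: the iterate stays feasible
```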


Full work available at URL: https://arxiv.org/abs/2012.10469
