Global rates of convergence for nonconvex optimization on manifolds


DOI: 10.1093/IMANUM/DRX080
zbMATH Open: 1483.65092
arXiv: 1605.08101
OpenAlex: W2402588523
MaRDI QID: Q5113326

Nicolas Boumal, P.-A. Absil, Coralia Cartis

Publication date: 4 June 2020

Published in: IMA Journal of Numerical Analysis

Abstract: We consider the minimization of a cost function $f$ on a manifold $\mathcal{M}$ using Riemannian gradient descent and Riemannian trust regions (RTR). We focus on satisfying necessary optimality conditions within a tolerance $\varepsilon$. Specifically, we show that, under Lipschitz-type assumptions on the pullbacks of $f$ to the tangent spaces of $\mathcal{M}$, both of these algorithms produce points with Riemannian gradient smaller than $\varepsilon$ in $O(1/\varepsilon^2)$ iterations. Furthermore, RTR returns a point where also the Riemannian Hessian's least eigenvalue is larger than $-\varepsilon$, in $O(1/\varepsilon^3)$ iterations. There are no assumptions on initialization. The rates match their (sharp) unconstrained counterparts as a function of the accuracy $\varepsilon$ (up to constants) and hence are sharp in that sense. These are the first deterministic results for global rates of convergence to approximate first- and second-order Karush-Kuhn-Tucker points on manifolds. They apply in particular for optimization constrained to compact submanifolds of $\mathbb{R}^n$, under simpler assumptions.
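
For illustration only, here is a minimal sketch (not the authors' implementation) of the kind of method the abstract analyses: Riemannian gradient descent on the unit sphere, stopped by the first-order criterion $\|\mathrm{grad}\, f(x)\| \le \varepsilon$. The fixed step size and the particular cost function are assumptions made for this example; the paper's $O(1/\varepsilon^2)$ rate relies on a step governed by Lipschitz-type constants of the pullbacks.

```python
# Hypothetical sketch: Riemannian gradient descent on the unit sphere S^{n-1} in R^n.
import numpy as np

def riemannian_gradient_descent(egrad, x0, step=0.1, eps=1e-6, max_iter=10_000):
    """Minimize a cost on the unit sphere given its Euclidean gradient `egrad`.

    The Riemannian gradient is the projection of the Euclidean gradient onto
    the tangent space at x; the retraction used here is renormalization.
    """
    x = x0 / np.linalg.norm(x0)
    for k in range(max_iter):
        g = egrad(x)
        rgrad = g - (x @ g) * x            # project onto the tangent space at x
        if np.linalg.norm(rgrad) <= eps:   # first-order criterion: ||grad f(x)|| <= eps
            return x, k
        x = x - step * rgrad               # step along the negative Riemannian gradient
        x = x / np.linalg.norm(x)          # retract back onto the sphere
    return x, max_iter

# Illustrative use: leading eigenvector of a symmetric matrix A via f(x) = -x'Ax,
# whose Euclidean gradient is -2Ax.
A = np.diag([3.0, 1.0, 0.5])
x_opt, iters = riemannian_gradient_descent(lambda x: -2.0 * A @ x, np.ones(3))
```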


Full work available at URL: https://arxiv.org/abs/1605.08101






Cited In (63)






