Adaptive quadratically regularized Newton method for Riemannian optimization


DOI: 10.1137/17M1142478
zbMATH Open: 1415.65139
arXiv: 1708.02016
OpenAlex: W2884819278
Wikidata: Q115246937
Scholia: Q115246937
MaRDI QID: Q3176355
FDO: Q3176355


Authors: Jiang Hu, Andre Milzarek, Zaiwen Wen, Yaxiang Yuan


Publication date: 20 July 2018

Published in: SIAM Journal on Matrix Analysis and Applications

Abstract: Optimization on Riemannian manifolds arises widely in eigenvalue computation, density functional theory, Bose-Einstein condensates, low-rank nearest correlation estimation, image registration, signal processing, and other applications. We propose an adaptive regularized Newton method which approximates the original objective function by its second-order Taylor expansion in Euclidean space but keeps the Riemannian manifold constraints. The regularization term in the objective function of the subproblem enables us to establish a Cauchy-point-like condition, as in the standard trust-region method, for proving global convergence. The subproblem can be solved inexactly either by first-order methods or by a modified Riemannian Newton method. In the latter case, the method can further take advantage of negative curvature directions. Both global convergence and superlinear local convergence are guaranteed under mild conditions. Extensive computational experiments and comparisons with other state-of-the-art methods indicate that the proposed algorithm is very promising.
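To make the algorithmic structure described in the abstract concrete, the following is a minimal sketch in Python: the second-order Euclidean model with a quadratic regularization term is minimized inexactly over the tangent space (the first-order option mentioned above), the step is retracted back to the manifold, and the regularization parameter is adapted via a trust-region-style ratio test. The test problem (a Rayleigh quotient on the unit sphere), the projected-gradient inner solver, the acceptance thresholds, and all function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sphere_retract(x, d):
    """Retraction on the unit sphere: move along d and renormalize."""
    y = x + d
    return y / np.linalg.norm(y)

def arn_sphere_sketch(A, x0, sigma0=1.0, max_iter=100, tol=1e-8):
    """Sketch of an adaptive regularized Newton loop for min 0.5*x'Ax on the sphere."""
    f = lambda x: 0.5 * x @ A @ x        # objective
    grad = lambda x: A @ x               # Euclidean gradient
    hess = lambda x, d: A @ d            # Euclidean Hessian applied to d
    L = np.linalg.norm(A, 2)             # Lipschitz-type constant for the model gradient

    x, sigma = x0 / np.linalg.norm(x0), sigma0
    for _ in range(max_iter):
        g = grad(x)
        rgrad = g - (x @ g) * x          # Riemannian gradient (tangent-space projection)
        if np.linalg.norm(rgrad) < tol:
            break

        # Regularized second-order (Euclidean) model in the tangent space at x:
        #   m(d) = <g, d> + 0.5*<hess(x)d, d> + 0.5*sigma*||d||^2,
        # solved inexactly with a few projected gradient steps (first-order option).
        d = np.zeros_like(x)
        for _ in range(20):
            mg = g + hess(x, d) + sigma * d
            mg -= (x @ mg) * x           # keep the model gradient tangent at x
            d -= mg / (L + sigma)

        # Trust-region-style ratio test: actual vs. predicted reduction.
        x_trial = sphere_retract(x, d)
        pred = -(g @ d + 0.5 * d @ hess(x, d) + 0.5 * sigma * d @ d)
        ared = f(x) - f(x_trial)
        rho = ared / max(pred, 1e-16)

        if rho > 0.1:                    # accept the trial point
            x = x_trial
        # Adapt the regularization parameter like a trust-region radius (illustrative rule).
        sigma = 0.5 * sigma if rho > 0.75 else (4.0 * sigma if rho < 0.1 else sigma)

    return x

# Usage: the minimizer approximates the eigenvector of A's smallest eigenvalue.
A = np.diag(np.arange(1.0, 11.0))
x_min = arn_sphere_sketch(A, np.ones(10))
print(x_min)
```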


Full work available at URL: https://arxiv.org/abs/1708.02016






Cited in 36 documents

