On the quadratic convergence of the cubic regularization method under a local error bound condition
From MaRDI portal
Publication:4634142
Abstract: In this paper we consider the cubic regularization (CR) method for minimizing a twice continuously differentiable function. While the CR method is widely recognized as a globally convergent variant of Newton's method with superior iteration complexity, existing results on its local quadratic convergence require a stringent non-degeneracy condition. We prove that under a local error bound (EB) condition, which is a much weaker requirement than the existing non-degeneracy condition, the sequence of iterates generated by the CR method converges at least Q-quadratically to a second-order critical point. This indicates that adding a cubic regularization not only equips Newton's method with remarkable global convergence properties but also enables it to converge quadratically even in the presence of degenerate solutions. As a byproduct, we show that without assuming convexity, the proposed EB condition is equivalent to a quadratic growth condition, which could be of independent interest. To demonstrate the usefulness and relevance of our convergence analysis, we focus on two concrete nonconvex optimization problems that arise in phase retrieval and low-rank matrix recovery, respectively, and prove that with overwhelming probability, the sequence of iterates generated by the CR method for solving these two problems converges at least Q-quadratically to a global minimizer. We also present numerical results of the CR method applied to these two problems to support and complement our theoretical development.
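The CR iteration described in the abstract updates x_{k+1} = x_k + s_k, where s_k minimizes the cubic model m_k(s) = g_kᵀs + ½ sᵀH_k s + (M/6)‖s‖³. The following is a minimal illustrative sketch, not the paper's implementation: the function names, the constant M = 10, and the inner-loop parameters are our own choices, and the subproblem is solved approximately by gradient descent on the cubic model (in the spirit of the cited work "Gradient descent finds the cubic-regularized nonconvex Newton step"). It is demonstrated on f(x) = (‖x‖² − 1)², whose minimizers form a circle and have a singular Hessian, so the classical non-degeneracy condition fails while quadratic growth (and hence the EB condition) holds.

```python
import numpy as np

def cr_step(g, H, M, n_inner=500):
    """Approximately minimize the cubic model
        m(s) = g.s + 0.5 s'Hs + (M/6)||s||^3
    by gradient descent; the step size below is a simple heuristic."""
    s = -g / (np.linalg.norm(g) + 1.0)  # small initial step along -g
    lr = 1.0 / (np.linalg.norm(H, 2) + M * np.linalg.norm(g) ** 0.5 + 1.0)
    for _ in range(n_inner):
        # gradient of the cubic model: g + Hs + (M/2)||s|| s
        grad_m = g + H @ s + 0.5 * M * np.linalg.norm(s) * s
        s -= lr * grad_m
    return s

def cubic_regularization(f_grad, f_hess, x0, M=10.0, tol=1e-10, max_iter=50):
    """Outer CR loop: x_{k+1} = x_k + argmin_s m_k(s)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g, H = f_grad(x), f_hess(x)
        if np.linalg.norm(g) < tol:
            break
        x = x + cr_step(g, H, M)
    return x

# Degenerate test problem: f(x) = (||x||^2 - 1)^2, minimizers on the unit circle.
f_grad = lambda x: 4.0 * (x @ x - 1.0) * x
f_hess = lambda x: 4.0 * (x @ x - 1.0) * np.eye(2) + 8.0 * np.outer(x, x)
x_star = cubic_regularization(f_grad, f_hess, np.array([2.0, 1.0]))
```

On this example the iterates approach the circle of minimizers rapidly despite the Hessian being singular there, consistent with the quadratic convergence the paper establishes under the EB condition.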
Recommendations
- A note on inexact gradient and Hessian conditions for cubic regularized Newton's method
- Cubic regularization of Newton method and its global performance
- Accelerating the cubic regularization of Newton's method on convex problems
- On global minimizers of quadratic functions with cubic regularization
- The use of quadratic regularization with a cubic descent condition for unconstrained optimization
Cites work
- scientific article (zbMATH DE number 1694914)
- scientific article (zbMATH DE number 3980094)
- scientific article (zbMATH DE number 1569003)
- scientific article (zbMATH DE number 5060482)
- A family of inexact SQA methods for non-smooth convex minimization with provable convergence guarantees based on the Luo-Tseng error bound property
- A geometric analysis of phase retrieval
- A method for the solution of certain non-linear problems in least squares
- A unified approach to error bounds for structured convex optimization problems
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- An Algorithm for Least-Squares Estimation of Nonlinear Parameters
- Calculus of the exponent of Kurdyka-Łojasiewicz inequality and its applications to linear convergence of first-order methods
- Characterization of metric regularity of subdifferentials
- Cubic regularization of Newton method and its global performance
- Error bounds and convergence analysis of feasible descent methods: A general approach
- From error bounds to the complexity of first-order descent methods for convex functions
- Gradient descent finds the cubic-regularized nonconvex Newton step
- Local behavior of an iterative framework for generalized equations with nonisolated solutions
- Nonlinear stepsize control, trust regions and regularizations for unconstrained optimization
- Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria
- On the complexity of steepest descent, Newton's and regularized Newton's methods for nonconvex unconstrained optimization problems
- On the quadratic convergence of the Levenberg-Marquardt method without nonsingularity assumption
- Ordinary differential equations
- Phase retrieval via Wirtinger flow: theory and algorithms
- Recent advances in trust region algorithms
- Regularized Newton methods for convex minimization problems with singular solutions
- Second-order growth, tilt stability, and metric regularity of the subdifferential
- Strong local convergence properties of adaptive regularized methods for nonlinear least squares
- The restricted isometry property and its implications for compressed sensing
- Tight Oracle Inequalities for Low-Rank Matrix Recovery From a Minimal Number of Noisy Random Measurements
- Variational Analysis
Cited in (9 documents)
- An accelerated first-order method with complexity analysis for solving cubic regularization subproblems
- A Bregman forward-backward linesearch algorithm for nonconvex composite optimization: superlinear convergence to nonisolated local minima
- Nonconvex Robust Low-Rank Matrix Recovery
- On high-order multilevel optimization strategies
- A unified approach to synchronization problems over subgroups of the orthogonal group
- GNMR: a provable one-line algorithm for low rank matrix recovery
- A note on inexact gradient and Hessian conditions for cubic regularized Newton's method
- Cubic regularization methods with second-order complexity guarantee based on a new subproblem reformulation
- Cubic regularized Newton method for the saddle point models: a global and local convergence analysis