Trust-Region Newton-CG with Strong Second-Order Complexity Guarantees for Nonconvex Optimization
DOI: 10.1137/19M130563X · zbMath: 1461.90107 · arXiv: 1912.04365 · OpenAlex: W3126946265 · MaRDI QID: Q5853562
Daniel P. Robinson, C. W. Royer, Stephen J. Wright, Frank E. Curtis
Publication date: 10 March 2021
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1912.04365
Newton's method; Lanczos method; conjugate gradient method; negative curvature; worst-case complexity; trust-region methods; smooth nonconvex optimization
Numerical mathematical programming methods (65K05); Abstract computational complexity for mathematical programming problems (90C60); Nonconvex programming, global optimization (90C26); Numerical methods based on necessary conditions (49M05); Newton-type methods (49M15)
Related Items
Uses Software
Cites Work
- A linear-time algorithm for trust region problems
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon^{-3/2})\) for nonconvex optimization
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Complexity bounds for second-order optimality in unconstrained optimization
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- A linear-time algorithm for the trust region subproblem based on hidden convexity
- On the use of the energy norm in trust-region and adaptive cubic regularization subproblems
- A line-search algorithm inspired by the adaptive cubic regularization framework and complexity analysis
- A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization
- CUTEst: a constrained and unconstrained testing environment with safe threads for mathematical optimization
- Cubic regularization of Newton method and its global performance
- Newton-Type Minimization via the Lanczos Method
- Computing a Trust Region Step
- Recursive Trust-Region Methods for Multiscale Nonlinear Optimization
- The Conjugate Gradient Method and Trust Regions in Large Scale Optimization
- Estimating the Largest Eigenvalue by the Power and Lanczos Algorithms with a Random Start
- Probabilistic Bounds on the Extremal Eigenvalues and Condition Number by the Lanczos Algorithm
- Accelerated Methods for NonConvex Optimization
- ARCq: a new adaptive regularization by cubics
- Complexity Analysis of Second-Order Line-Search Algorithms for Smooth Nonconvex Optimization
- Solving the Trust-Region Subproblem using the Lanczos Method
- Finding approximate local minima faster than gradient descent
- Error estimates for iterative algorithms for minimizing regularized quadratic subproblems
- On the Generalized Lanczos Trust-Region Method
- An inexact regularized Newton framework with a worst-case iteration complexity of \(\mathcal{O}(\varepsilon^{-3/2})\) for nonconvex optimization
- Benchmarking optimization software with performance profiles.