A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization
DOI: 10.1007/s10107-019-01362-7
zbMath: 1448.90081
arXiv: 1803.02924
OpenAlex: W2963900797
Wikidata: Q128532682 (Scholia: Q128532682)
MaRDI QID: Q2297654
Publication date: 20 February 2020
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1803.02924
Keywords: unconstrained optimization; Newton's method; conjugate gradient method; second-order optimality conditions; first-order optimality conditions; worst-case complexity; smooth nonconvex optimization
MSC classifications: Numerical computation of eigenvalues and eigenvectors of matrices (65F15); Abstract computational complexity for mathematical programming problems (90C60); Nonconvex programming, global optimization (90C26); Numerical optimization and variational techniques (65K10); Methods of quasi-Newton type (90C53); Iterative numerical methods for linear systems (65F10)
Cites Work
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon ^{-3/2})\) for nonconvex optimization
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Complexity bounds for second-order optimality in unconstrained optimization
- A nonmonotone truncated Newton-Krylov method exploiting negative curvature directions, for large scale unconstrained optimization
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- Cubic regularization of Newton method and its global performance
- Truncated-Newton algorithms for large-scale unconstrained optimization
- On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems
- Evaluating Derivatives
- The Conjugate Gradient Method and Trust Regions in Large Scale Optimization
- Estimating the Largest Eigenvalue by the Power and Lanczos Algorithms with a Random Start
- Trust Region Methods
- Exploiting negative curvature directions in linesearch methods for unconstrained optimization
- Accelerated Methods for Nonconvex Optimization
- Worst-case evaluation complexity of regularization methods for smooth unconstrained optimization using Hölder continuous gradients
- Complexity Analysis of Second-Order Line-Search Algorithms for Smooth Nonconvex Optimization
- Finding approximate local minima faster than gradient descent
- The Use of Quadratic Regularization with a Cubic Descent Condition for Unconstrained Optimization
- An inexact regularized Newton framework with a worst-case iteration complexity of \(\mathcal{O}(\epsilon^{-3/2})\) for nonconvex optimization