Worst-Case Complexity of TRACE with Inexact Subproblem Solutions for Nonconvex Smooth Optimization
DOI: 10.1137/22m1492428 · zbMath: 1522.90119 · arXiv: 2204.11322 · OpenAlex: W4385985705 · MaRDI QID: Q6046826
No author found.
Publication date: 6 September 2023
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/2204.11322
Keywords: nonlinear optimization; nonconvex optimization; trust-region methods; worst-case iteration complexity; worst-case evaluation complexity
MSC: Analysis of algorithms and problem complexity (68Q25); Numerical mathematical programming methods (65K05); Abstract computational complexity for mathematical programming problems (90C60); Nonconvex programming, global optimization (90C26); Nonlinear programming (90C30); Numerical optimization and variational techniques (65K10); Complexity and performance of numerical algorithms (65Y20)
Cites Work
- On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon ^{-3/2})\) for nonconvex optimization
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Updating the regularization parameter in the adaptive cubic regularization algorithm
- Concise complexity analyses for trust region methods
- A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization
- CUTEst: a constrained and unconstrained testing environment with safe threads for mathematical optimization
- Cubic regularization of Newton method and its global performance
- Solving the Trust-Region Subproblem By a Generalized Eigenvalue Problem
- On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems
- Computing a Trust Region Step
- The Conjugate Gradient Method and Trust Regions in Large Scale Optimization
- Inexact Newton Methods
- Estimating the Largest Eigenvalue by the Power and Lanczos Algorithms with a Random Start
- Probabilistic Bounds on the Extremal Eigenvalues and Condition Number by the Lanczos Algorithm
- Trust Region Methods
- Accelerated Methods for NonConvex Optimization
- ARCq: a new adaptive regularization by cubics
- Complexity Analysis of Second-Order Line-Search Algorithms for Smooth Nonconvex Optimization
- Solving the Trust-Region Subproblem using the Lanczos Method
- Evaluation Complexity of Algorithms for Nonconvex Optimization: Theory, Computation and Perspectives
- Error estimates for iterative algorithms for minimizing regularized quadratic subproblems
- The Use of Quadratic Regularization with a Cubic Descent Condition for Unconstrained Optimization
- On the Generalized Lanczos Trust-Region Method
- Trust-Region Newton-CG with Strong Second-Order Complexity Guarantees for Nonconvex Optimization
- An inexact regularized Newton framework with a worst-case iteration complexity of \(\mathscr{O}(\varepsilon^{-3/2})\) for nonconvex optimization
- The Convergence of the Generalized Lanczos Trust-Region Method for the Trust-Region Subproblem
- Benchmarking optimization software with performance profiles