Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
DOI: 10.1007/s10107-009-0337-y · zbMATH Open: 1229.90193 · OpenAlex: W1994974865 · Wikidata: Q58185744 · Scholia: Q58185744 · MaRDI QID: Q652287 · FDO: Q652287
Authors: Coralia Cartis, Nicholas I. M. Gould, Philippe L. Toint
Publication date: 14 December 2011
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://doi.org/10.1007/s10107-009-0337-y
Recommendations
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization
- ARC\(_q\): a new adaptive regularization by cubics
- On the use of iterative methods in cubic regularization for unconstrained optimization
- Complexity bounds for second-order optimality in unconstrained optimization
Keywords: nonlinear optimization; unconstrained optimization; global rate of convergence; Newton's method; trust-region methods; cubic regularization; global complexity bounds
MSC classification: Numerical mathematical programming methods (65K05); Analysis of algorithms and problem complexity (68Q25); Nonlinear programming (90C30); Numerical methods based on nonlinear programming (49M37); Abstract computational complexity for mathematical programming problems (90C60); Newton-type methods (49M15); Implicit function theorems; global Newton methods on manifolds (58C15)
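For context, the adaptive cubic regularisation (ARC) framework analysed in this paper builds, at each iterate \(x_k\), the model \(m_k(s) = f(x_k) + g_k^\top s + \tfrac12 s^\top B_k s + \tfrac{\sigma_k}{3}\|s\|^3\) and adapts the regularisation weight \(\sigma_k\) much like a trust-region radius. The following minimal one-dimensional sketch illustrates that mechanism only; the grid-based model minimisation, the test function, and all parameter values are illustrative assumptions, not the algorithm or parameters from the paper.

```python
import numpy as np

def cubic_model(f, g, B, sigma, s):
    """Cubic model m(s) = f + g*s + 0.5*B*s^2 + (sigma/3)*|s|^3 (1-D case)."""
    return f + g * s + 0.5 * B * s**2 + (sigma / 3.0) * np.abs(s)**3

def arc_minimize(fun, grad, hess, x0, sigma0=1.0, tol=1e-2, max_iter=100):
    """Toy 1-D adaptive cubic regularisation loop (illustrative only).

    sigma is relaxed after successful steps and increased after rejected
    ones, mirroring trust-region-style adaptivity.
    """
    x, sigma = x0, sigma0
    for _ in range(max_iter):
        f, g, B = fun(x), grad(x), hess(x)
        if abs(g) < tol:
            break
        # Approximately minimise the cubic model over a coarse grid of steps.
        trial = np.linspace(-2.0, 2.0, 4001)
        s = trial[np.argmin(cubic_model(f, g, B, sigma, trial))]
        pred = f - cubic_model(f, g, B, sigma, s)   # predicted decrease
        rho = (f - fun(x + s)) / pred if pred > 0 else -1.0
        if rho > 0.1:   # successful step: accept and relax regularisation
            x, sigma = x + s, max(0.5 * sigma, 1e-8)
        else:           # unsuccessful step: reject and regularise more
            sigma *= 2.0
    return x

# Hypothetical smooth test function: f(x) = (x - 1)^2 + 0.1 x^4.
f = lambda x: (x - 1.0)**2 + 0.1 * x**4
g = lambda x: 2.0 * (x - 1.0) + 0.4 * x**3
h = lambda x: 2.0 + 1.2 * x**2
x_star = arc_minimize(f, g, h, x0=3.0)
```

The coarse grid search stands in for the (inexact) subproblem solves studied in the paper, whose worst-case analysis shows that such a scheme needs at most \(O(\epsilon^{-3/2})\) evaluations to drive the gradient below \(\epsilon\).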
Cites Work
- Title not available
- Numerical Optimization
- Title not available
- Introductory lectures on convex optimization. A basic course.
- Trust Region Methods
- Recursive Trust-Region Methods for Multiscale Nonlinear Optimization
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- A Characterization of Superlinear Convergence and Its Application to Quasi-Newton Methods
- Cubic regularization of Newton method and its global performance
- Accelerating the cubic regularization of Newton's method on convex problems
- A recursive \(\ell_\infty\)-trust-region method for bound-constrained nonlinear optimization
- Affine conjugate adaptive Newton methods for nonlinear elastomechanics
Cited In (only showing first 100 items)
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- Solving Large-Scale Cubic Regularization by a Generalized Eigenvalue Problem
- On High-order Model Regularization for Constrained Optimization
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
- A line-search algorithm inspired by the adaptive cubic regularization framework and complexity analysis
- Complexity analysis of second-order line-search algorithms for smooth nonconvex optimization
- Cubic regularization in symmetric rank-1 quasi-Newton methods
- On a global complexity bound of the Levenberg-Marquardt method
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
- Cubic overestimation and secant updating for unconstrained optimization of \(C^{2,1}\) functions
- A Newton-like method with mixed factorizations and cubic regularization for unconstrained minimization
- Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization
- Worst case complexity of direct search
- Worst case complexity of direct search under convexity
- Structured nonconvex and nonsmooth optimization: algorithms and iteration complexity analysis
- The use of quadratic regularization with a cubic descent condition for unconstrained optimization
- A new regularized quasi-Newton algorithm for unconstrained optimization
- On the complexity of finding first-order critical points in constrained nonlinear optimization
- On the worst-case complexity of nonlinear stepsize control algorithms for convex unconstrained optimization
- A derivative-free trust-region algorithm for composite nonsmooth optimization
- Concise complexity analyses for trust region methods
- An adaptive cubic regularization algorithm for nonconvex optimization with convex constraints and its function-evaluation complexity
- Regional complexity analysis of algorithms for nonconvex smooth optimization
- Worst-case complexity bounds of directional direct-search methods for multiobjective optimization
- On complexity and convergence of high-order coordinate descent algorithms for smooth nonconvex box-constrained minimization
- A smoothing SQP framework for a class of composite \(L_q\) minimization over polyhedron
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
- An interior affine scaling cubic regularization algorithm for derivative-free optimization subject to bound constraints
- Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary
- A cubic regularization algorithm for unconstrained optimization using line search and nonmonotone techniques
- Algebraic rules for quadratic regularization of Newton's method
- Global complexity bound of the inexact Levenberg-Marquardt method
- On the worst-case evaluation complexity of non-monotone line search algorithms
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Trust-region methods without using derivatives: worst case complexity and the nonsmooth case
- Global complexity bound of the Levenberg-Marquardt method
- Interior-point methods for nonconvex nonlinear programming: cubic regularization
- Separable cubic modeling and a trust-region strategy for unconstrained minimization with impact in global optimization
- A new augmented Lagrangian method for equality constrained optimization with simple unconstrained subproblem
- ARC\(_q\): a new adaptive regularization by cubics
- Updating the regularization parameter in the adaptive cubic regularization algorithm
- An improvement of adaptive cubic regularization method for unconstrained optimization problems
- Global complexity bound analysis of the Levenberg-Marquardt method for nonsmooth equations and its application to the nonlinear complementarity problem
- A trust region algorithm with adaptive cubic regularization methods for nonsmooth convex minimization
- Worst-case evaluation complexity of regularization methods for smooth unconstrained optimization using Hölder continuous gradients
- Evaluation complexity for nonlinear constrained optimization using unscaled KKT conditions and high-order models
- A note on the worst-case complexity of nonlinear stepsize control methods for convex smooth unconstrained optimization
- Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems
- On regularization and active-set methods with complexity for constrained optimization
- On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization
- Worst-case evaluation complexity of non-monotone gradient-related algorithms for unconstrained optimization
- A concise second-order complexity analysis for unconstrained optimization using high-order regularized models
- On the Evaluation Complexity of Constrained Nonlinear Least-Squares and General Constrained Nonlinear Optimization Using Second-Order Methods
- Recent advances in trust region algorithms
- Nonlinear stepsize control algorithms: complexity bounds for first- and second-order optimality
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon ^{-3/2})\) for nonconvex optimization
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- Complexity analysis of interior point algorithms for non-Lipschitz and nonconvex minimization
- On the use of the energy norm in trust-region and adaptive cubic regularization subproblems
- Recent Theoretical Advances in Non-Convex Optimization
- A second-order globally convergent direct-search method and its worst-case complexity
- On the complexity of an inexact restoration method for constrained optimization
- On the use of iterative methods in cubic regularization for unconstrained optimization
- On the complexity of solving feasibility problems with regularized models
- Stochastic variance-reduced cubic regularization methods
- \texttt{trlib}: a vector-free implementation of the GLTR method for iterative solution of the trust region problem
- A trust region method for finding second-order stationarity in linearly constrained nonconvex optimization
- On the Quadratic Convergence of the Cubic Regularization Method under a Local Error Bound Condition
- Convergence Properties of an Objective-Function-Free Optimization Regularization Algorithm, Including an \(\boldsymbol{\mathcal{O}(\epsilon^{-3/2})}\) Complexity Bound
- First-Order Methods for Nonconvex Quadratic Minimization
- Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy
- Two modified adaptive cubic regularization algorithms by using the nonmonotone Armijo-type line search
- A Newton-based method for nonconvex optimization with fast evasion of saddle points
- Sharp worst-case evaluation complexity bounds for arbitrary-order nonconvex optimization with inexpensive constraints
- Projected adaptive cubic regularization algorithm with derivative-free filter technique for box constrained optimization
- A cubic regularization of Newton's method with finite difference Hessian approximations
- Accelerated regularized Newton methods for minimizing composite convex functions
- Gradient descent finds the cubic-regularized nonconvex Newton step
- A note on inexact gradient and Hessian conditions for cubic regularized Newton's method
- Newton-type methods for non-convex optimization under inexact Hessian information
- A generalized worst-case complexity analysis for non-monotone line searches
- Minimizing uniformly convex functions by cubic regularization of Newton method
- A decoupled first/second-order steps technique for nonconvex nonlinear unconstrained optimization with improved complexity bounds
- On high-order multilevel optimization strategies
- Smoothness parameter of power of Euclidean norm
- Adaptive regularization with cubics on manifolds
- Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy
- Trust-region Newton-CG with strong second-order complexity guarantees for nonconvex optimization
- A filter sequential adaptive cubic regularization algorithm for nonlinear constrained optimization
- Structured Quasi-Newton Methods for Optimization with Orthogonality Constraints
- New subspace minimization conjugate gradient methods based on regularization model for unconstrained optimization
- Several accelerated subspace minimization conjugate gradient methods based on regularization model and convergence rate analysis for nonconvex problems
- Implementable tensor methods in unconstrained convex optimization
- On global minimizers of quadratic functions with cubic regularization
- Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives
- Combining stochastic adaptive cubic regularization with negative curvature for nonconvex optimization
- Cubic regularization methods with second-order complexity guarantee based on a new subproblem reformulation
- Second-order guarantees of distributed gradient algorithms
- Solving the cubic regularization model by a nested restarting Lanczos method
- Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact