Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
DOI: 10.1007/s10107-009-0337-y
zbMATH Open: 1229.90193
OpenAlex: W1994974865
Wikidata: Q58185744
Scholia: Q58185744
MaRDI QID: Q652287
FDO: Q652287
Coralia Cartis, Nicholas I. M. Gould, Philippe L. Toint
Publication date: 14 December 2011
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://doi.org/10.1007/s10107-009-0337-y
Keywords: nonlinear optimization; unconstrained optimization; global rate of convergence; Newton's method; trust-region methods; cubic regularization; global complexity bounds
MSC classification: Numerical mathematical programming methods (65K05) Analysis of algorithms and problem complexity (68Q25) Nonlinear programming (90C30) Numerical methods based on nonlinear programming (49M37) Abstract computational complexity for mathematical programming problems (90C60) Newton-type methods (49M15) Implicit function theorems; global Newton methods on manifolds (58C15)
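For context, a minimal sketch of the cubic model underlying the adaptive cubic regularisation (ARC) framework studied here, in standard notation assumed rather than quoted from the paper: at iterate \(x_k\), each trial step approximately minimises

\[
m_k(s) \;=\; f(x_k) + g_k^{\top} s + \tfrac{1}{2}\, s^{\top} B_k s + \tfrac{\sigma_k}{3}\, \|s\|^{3},
\]

where \(g_k = \nabla f(x_k)\), \(B_k\) approximates \(\nabla^2 f(x_k)\), and the regularisation weight \(\sigma_k > 0\) is adapted from iteration to iteration. The paper's headline result is the worst-case bound that, under suitable smoothness and approximate subproblem termination, ARC drives \(\|\nabla f(x_k)\|\) below \(\epsilon\) within at most \(\mathcal{O}(\epsilon^{-3/2})\) function and derivative evaluations.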
Cites Work
- Numerical Optimization
- Introductory lectures on convex optimization. A basic course.
- Trust Region Methods
- Recursive Trust-Region Methods for Multiscale Nonlinear Optimization
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- A Characterization of Superlinear Convergence and Its Application to Quasi-Newton Methods
- Cubic regularization of Newton method and its global performance
- Accelerating the cubic regularization of Newton's method on convex problems
- A recursive \(\ell_\infty\)-trust-region method for bound-constrained nonlinear optimization
- Affine conjugate adaptive Newton methods for nonlinear elastomechanics
Cited In (only showing first 100 items)
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- Solving Large-Scale Cubic Regularization by a Generalized Eigenvalue Problem
- On High-order Model Regularization for Constrained Optimization
- ARCq: a new adaptive regularization by cubics
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
- A line-search algorithm inspired by the adaptive cubic regularization framework and complexity analysis
- Cubic regularization in symmetric rank-1 quasi-Newton methods
- On a global complexity bound of the Levenberg-Marquardt method
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
- A Newton-like method with mixed factorizations and cubic regularization for unconstrained minimization
- Worst case complexity of direct search
- Worst case complexity of direct search under convexity
- Structured nonconvex and nonsmooth optimization: algorithms and iteration complexity analysis
- On Regularization and Active-set Methods with Complexity for Constrained Optimization
- A new regularized quasi-Newton algorithm for unconstrained optimization
- On the complexity of finding first-order critical points in constrained nonlinear optimization
- On the worst-case complexity of nonlinear stepsize control algorithms for convex unconstrained optimization
- A derivative-free trust-region algorithm for composite nonsmooth optimization
- Concise complexity analyses for trust region methods
- Regional complexity analysis of algorithms for nonconvex smooth optimization
- Worst-case complexity bounds of directional direct-search methods for multiobjective optimization
- On complexity and convergence of high-order coordinate descent algorithms for smooth nonconvex box-constrained minimization
- A smoothing SQP framework for a class of composite \(L_q\) minimization over polyhedron
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
- Global complexity bound of the Levenberg–Marquardt method
- Newton-type methods for non-convex optimization under inexact Hessian information
- A decoupled first/second-order steps technique for nonconvex nonlinear unconstrained optimization with improved complexity bounds
- An interior affine scaling cubic regularization algorithm for derivative-free optimization subject to bound constraints
- Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary
- A cubic regularization algorithm for unconstrained optimization using line search and nonmonotone techniques
- Algebraic rules for quadratic regularization of Newton's method
- Global complexity bound of the inexact Levenberg-Marquardt method
- On the worst-case evaluation complexity of non-monotone line search algorithms
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Cubic overestimation and secant updating for unconstrained optimization of \(C^{2,1}\) functions
- trlib: a vector-free implementation of the GLTR method for iterative solution of the trust region problem
- Trust-region methods without using derivatives: worst case complexity and the nonsmooth case
- Interior-point methods for nonconvex nonlinear programming: cubic regularization
- Separable cubic modeling and a trust-region strategy for unconstrained minimization with impact in global optimization
- A new augmented Lagrangian method for equality constrained optimization with simple unconstrained subproblem
- On the Complexity of an Inexact Restoration Method for Constrained Optimization
- Updating the regularization parameter in the adaptive cubic regularization algorithm
- A Trust Region Method for Finding Second-Order Stationarity in Linearly Constrained Nonconvex Optimization
- An improvement of adaptive cubic regularization method for unconstrained optimization problems
- Global complexity bound analysis of the Levenberg-Marquardt method for nonsmooth equations and its application to the nonlinear complementarity problem
- A trust region algorithm with adaptive cubic regularization methods for nonsmooth convex minimization
- Worst-case evaluation complexity of regularization methods for smooth unconstrained optimization using Hölder continuous gradients
- Complexity Analysis of Second-Order Line-Search Algorithms for Smooth Nonconvex Optimization
- Evaluation complexity for nonlinear constrained optimization using unscaled KKT conditions and high-order models
- A note on the worst-case complexity of nonlinear stepsize control methods for convex smooth unconstrained optimization
- Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems
- On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization
- Worst-case evaluation complexity of non-monotone gradient-related algorithms for unconstrained optimization
- A concise second-order complexity analysis for unconstrained optimization using high-order regularized models
- On the Evaluation Complexity of Constrained Nonlinear Least-Squares and General Constrained Nonlinear Optimization Using Second-Order Methods
- Recent advances in trust region algorithms
- Nonlinear stepsize control algorithms: complexity bounds for first- and second-order optimality
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon ^{-3/2})\) for nonconvex optimization
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- Complexity analysis of interior point algorithms for non-Lipschitz and nonconvex minimization
- On the use of the energy norm in trust-region and adaptive cubic regularization subproblems
- Recent Theoretical Advances in Non-Convex Optimization
- A Newton-Based Method for Nonconvex Optimization with Fast Evasion of Saddle Points
- Complexity of Partially Separable Convexly Constrained Optimization with Non-Lipschitzian Singularities
- A second-order globally convergent direct-search method and its worst-case complexity
- On the use of iterative methods in cubic regularization for unconstrained optimization
- On the complexity of solving feasibility problems with regularized models
- Sharp Worst-Case Evaluation Complexity Bounds for Arbitrary-Order Nonconvex Optimization with Inexpensive Constraints
- Adaptive Regularization Algorithms with Inexact Evaluations for Nonconvex Optimization
- The Use of Quadratic Regularization with a Cubic Descent Condition for Unconstrained Optimization
- On the Quadratic Convergence of the Cubic Regularization Method under a Local Error Bound Condition
- An accelerated first-order method for non-convex optimization on manifolds
- Gradient Descent Finds the Cubic-Regularized Nonconvex Newton Step
- Parameter-free accelerated gradient descent for nonconvex minimization
- On High-Order Multilevel Optimization Strategies
- Super-Universal Regularized Newton Method
- A Riemannian dimension-reduced second-order method with application in sensor network localization
- Convergence Properties of an Objective-Function-Free Optimization Regularization Algorithm, Including an \(\boldsymbol{\mathcal{O}(\epsilon^{-3/2})}\) Complexity Bound
- On the complexity of a stochastic Levenberg-Marquardt method
- First-Order Methods for Nonconvex Quadratic Minimization
- Gradient descent in the absence of global Lipschitz continuity of the gradients
- Inexact tensor methods and their application to stochastic convex optimization
- Trust-Region Newton-CG with Strong Second-Order Complexity Guarantees for Nonconvex Optimization
- Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy
- Two modified adaptive cubic regularization algorithms by using the nonmonotone Armijo-type line search
- Second-Order Guarantees of Distributed Gradient Algorithms
- Worst-Case Complexity of TRACE with Inexact Subproblem Solutions for Nonconvex Smooth Optimization
- Scalable adaptive cubic regularization methods
- Projected adaptive cubic regularization algorithm with derivative-free filter technique for box constrained optimization
- Finding zeros of Hölder metrically subregular mappings via globally convergent Levenberg–Marquardt methods
- A cubic regularization of Newton's method with finite difference Hessian approximations
- An adaptive regularization method in Banach spaces
- A note on inexact gradient and Hessian conditions for cubic regularized Newton's method
- A generalized worst-case complexity analysis for non-monotone line searches
- Minimizing uniformly convex functions by cubic regularization of Newton method
- A Unified Adaptive Tensor Approximation Scheme to Accelerate Composite Convex Optimization
- Solving the Cubic Regularization Model by a Nested Restarting Lanczos Method
- Smoothness parameter of power of Euclidean norm
- Adaptive regularization with cubics on manifolds